ChatGPT's Generative AI Could Be Used To Dangerously Amplify Political Disinformation Campaigns
A few months back, tech giant OpenAI acknowledged that ChatGPT could be used to power political disinformation campaigns, and said it was updating its policies to prevent that from happening.
Everything appeared to be under control, but a new report from The Washington Post suggests that might not be the case, and that we should perhaps be more concerned about how the company is handling the issue.
Think along the lines of the chatbot being used to break the very rules the company put in place, with serious repercussions for the upcoming US elections.
The company’s usage policies ban political campaigning outright. So if you’d like to generate a large volume of material for your next political campaign, you might want to think again: mass-produced campaign content isn’t allowed, nor are projects that target specific demographics, or chatbots built to spread information of questionable truthfulness. Lobbying and politically themed advocacy don’t get a thumbs up either.
Speaking with Semafor, ChatGPT’s parent firm said it has zero tolerance for this kind of activity. It is therefore working on a machine-learning classifier that would alert the firm whenever large volumes of text about election campaigns, or related activity, are being generated.
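OpenAI hasn’t published any details of how that classifier works, but a minimal sketch of the general idea, assuming a scikit-learn text pipeline and purely hypothetical training examples, might look something like this:

```python
# Illustrative sketch only: a text classifier that flags election-campaign
# material so that high-volume generation could be reported for review.
# The training data and threshold are hypothetical placeholders; this is
# not OpenAI's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = campaign material, 0 = benign text.
texts = [
    "Vote for our candidate to secure the future of this district",
    "Join the rally this Saturday and bring your friends to the polls",
    "Here is a simple recipe for a hearty vegetable soup",
    "The museum opens at nine and closes at five on weekdays",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def flag_if_campaign(text: str, threshold: float = 0.5) -> bool:
    """Return True when the text looks like election-campaign material."""
    return model.predict_proba([text])[0][1] >= threshold

print(flag_if_campaign("Canvass your neighborhood and vote on Tuesday"))
```

In a real deployment, a system like this would run over generated outputs in aggregate, so that unusually large batches of flagged text could be escalated for human review.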
But no matter how much reassurance is offered, the company seems to be failing to actually enforce these rules. In the past couple of months, a Washington Post investigation showed that one can simply enter a prompt, such as asking for a message to motivate suburban women to turn out and vote for a particular presidential candidate, and the chatbot complies.
That includes listing policies that would benefit younger voters in particular. Put into perspective, OpenAI once took a broad, proactive stance against political risks, whereas now it seems more inclined to steer clear of the area altogether.
The firm’s spokesperson did stress that the goal is also to avoid blocking campaigns or material that could be helpful to some audiences and doesn’t violate its rules, for instance, campaigns designed to stop diseases from spreading, or marketing for small businesses.
Much as social media apps did when they first emerged, the company and other AI startups are struggling with moderation. The difference now is that the questions concern not just what content gets shared, but also who gets access to these tools and under what conditions.
OpenAI did reveal this past month that it’s rolling out a new content-moderation system designed to be customizable, scalable, and consistent.
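OpenAI hasn’t published an API for that customizable system, but its publicly documented /v1/moderations endpoint illustrates the general workflow of screening text programmatically. The snippet below is a sketch against that documented endpoint, assuming an OPENAI_API_KEY environment variable is set:

```python
# Hedged illustration: screen a piece of text with OpenAI's documented
# moderation endpoint. This endpoint classifies text against fixed harm
# categories; the customizable GPT-4-based policy system described above
# is separate, so this stands in only as a sketch of the workflow.
import os
import requests

def moderate(text: str) -> dict:
    """Send text to the moderation endpoint and return the first result."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
    )
    response.raise_for_status()
    return response.json()["results"][0]

result = moderate("Example text to screen before publishing")
print(result["flagged"], result["categories"])
```

A publisher could run a check like this over content before it goes out, though as the article notes, the harder problem is deciding policy, not calling the endpoint.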
Over the past year, regulation of such tools hasn’t progressed as far as experts had hoped. But the pace seems to be picking up, as lawmakers introduce new laws that would deny AI-generated works certain legal protections.
