
Technology and the Threat to Global Elections

Updated: Apr 10, 2024


Gillian Tett, moderator and Financial Times columnist; Maria Ressa, Nobel Peace Prize-winning journalist; Věra Jourová, European Commission Vice-President for Values and Transparency; and Secretary Hillary Rodham Clinton, former U.S. presidential candidate, at the "AI’s Impact on the 2024 Global Elections” event on March 28, 2024 (left to right).






As the 2024 U.S. presidential election gets closer, my anxiety about extremists, racists and nation states spreading lies, propaganda and hate about candidates, voters and election workers intensifies.


For years in northern New Jersey, I was the victim of a hate campaign in which women and men harassed and bullied me as I shopped for my family, walked on public sidewalks and attended doctors’ appointments. Through their vitriolic personal attacks, I learned that my harassers claimed I belonged to a specific political party, which was a lie.


When Secretary Hillary Rodham Clinton, former U.S. presidential candidate, Maria Ressa, Nobel Peace Prize-winning journalist, and European Commission Vice-President for Values and Transparency Věra Jourová shared their experiences of being defamed through fake videos and attacked on social media during the “AI’s Impact on the 2024 Global Elections” event on March 28, 2024, I felt a weird kinship with these famous and accomplished women.


Presented by Columbia University’s School of International and Public Affairs’ Institute of Global Politics and Aspen Digital, the program featured Rodham Clinton (Professor of International and Public Affairs at Columbia University), Ressa and Jourová, who discussed the Russian disinformation and misinformation campaigns and “hidden manipulation” that targeted Rodham Clinton, Jourová and elections in the United States, Slovakia, Poland and the Czech Republic, as well as the online attacks against Ressa.


Dominika Hajdu, policy director of the Center for Democracy & Resilience at GLOBSEC; Taiwan AI Labs founder Ethan Tu; and Javier Pallero, Argentinian digital rights researcher and activist, discussed, respectively, Russian disinformation and information manipulation across Central and Eastern Europe; fake social media accounts created in Taiwan by China; and Argentinian voters’ distrust of politics, politicians and institutions. They also spoke about the lack of regulation of social media platforms in Taiwan and Argentina and about the EU’s Digital Services Act.


Jigsaw (Google) CEO Yasmin Green, Microsoft Threat Analysis Center General Manager Clint Watts and Meta Director of Global Threat Disruption David Agranovich explained how they monitor, moderate, detect and remove synthetic content (fake accounts, videos, images and audio) and influence operations on the internet and social media.


Michigan’s Secretary of State Jocelyn Benson, former Secretary of Homeland Security Michael Chertoff, Dara Lindenbaum, commissioner of the U.S. Federal Election Commission, Anna Makanju, vice president of OpenAI Global Affairs, and Eric Schmidt, cofounder of Schmidt Futures and former Google CEO and chairman, discussed how AI technology could cause harm at large scale and what actions governments, private companies and other stakeholders can take to combat harmful AI technology, state actors, misinformation and disinformation.


How Do We Stop Evil Tech and People from Poisoning Future Elections?


Speakers called for the repeal or revision of Section 230, as well as the use of digital watermarking or steganography, digital signatures and blockchain technology to trace and verify information, images and video.
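To make the digital-signature idea concrete, here is a minimal, hypothetical Python sketch of content signing and verification. Real provenance schemes use public-key signatures so anyone can verify without the secret; this sketch substitutes a shared-key HMAC from the standard library for brevity, and the key, function names and sample bytes are all illustrative.

```python
import hashlib
import hmac

# Illustrative only: in a real scheme this would be a publisher's
# private signing key, with verification done via the public key.
SECRET_KEY = b"publisher-demo-key"

def sign_content(content: bytes) -> str:
    """Return a hex tag cryptographically binding the content to the key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """True only if the content is byte-for-byte what was signed."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)

# A publisher signs a genuine media file when it is created ...
original = b"genuine campaign video bytes"
tag = sign_content(original)

# ... and any later viewer can check whether it was altered.
assert verify_content(original, tag)             # untouched content passes
assert not verify_content(original + b"x", tag)  # any edit breaks the check
```

The point of the example is the failure case: a deepfake derived from the original, or any edit at all, no longer matches the published tag, which is what lets signed media be distinguished from manipulated copies.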


They also called for bipartisan legislation and regulatory frameworks; increased content moderation on social media platforms; legal accountability for algorithm publishers; teaching critical thinking; collaboration among AI, social media and tech companies, election officials and civil society; cross-referencing of information by private companies; and information-sharing among federal and state authorities and the public, all to counter adversaries who use AI, disinformation and misinformation to destabilize elections worldwide.


“I mean for Americans, get rid of Section 230. Because the biggest problem we have is that there is impunity, right? Stop the impunity. Tech companies will say they will self-regulate. Self-regulation comes from news organizations when we were in charge of gatekeeping the public sphere, but we were not only just self-regulating, there were legal boundaries. If we lie, you file a suit. Right now, there's absolute impunity and America hasn't passed anything. I joke that the EU won the race of the turtles in filing legislation that will help us. It's too slow for the fast lightning pace of tech and the people who pay the price are us, us.” (Maria Ressa)


“… Section 230 has to go. We need a different system under which tech companies, and we're mostly talking obviously about the social media platforms, operate. And I for one think they will continue to make an enormous amount of money if they change their algorithms to prevent the kind of harm that is caused by sending people to the lowest common denominator every time they log on. You've got to stop this reward for this kind of negative virulent content, which affects us across the board. But I will say it is particularly focused on women. The empowerment of misogyny online has really caused so much fear and led to some violence against women who are willing to take a stand no matter who they are. Are they in entertainment? Are they academics? Are they in politics or journalism? Wherever they are. And the kind of ganging up effect that comes from online.


It could only be a small handful of people in St. Petersburg or Moldova or wherever they are right now who are lighting the fire. But because of the algorithms, everybody gets burned. And we have got to figure out how to remove the impunity, come up with the right form of liability and do what we can to try to change the algorithms. And the final thing I would say is we also need to pass some laws that understand that this is the new assault on free speech. In our country people yell, 'free speech,' they have no idea what they're talking about half the time. And they yell it to stop an argument, to stop a debate, to prevent legislation from passing. We need a much clearer idea of what it is we are asking governments to do, businesses to do in the name of do no harm. And free speech has always had limitations, always been subject to legislative action and judicial oversight. And we need to get back into that arena.” (Hillary Rodham Clinton)


“But maybe for the EU, it is easier to legislate the digital space because look at the situation. While the United States have to make a big jump, we were kind of ready for that because count with me, illegal content, hate speech, child pornography, terrorism, violent extremism, racism, xenophobia, antisemitism. We have all these things in our criminal laws for decades. This is nothing new. So when we started to think about how to legislate a digital space, we in fact said, what is illegal offline has to be handled as illegal online.”  (Věra Jourová)


“Well, I mean there's some technological tools. For example, there is now an effort to do watermarking of video and audio, where a genuine video or audio, when it's created, has an encrypted mark such that anybody who looks at it can validate that it is real and it's not fake. More than that, we've got to teach people about critical thinking and evaluation so they can crosscheck.


When you get a story that appears to stand alone, look to see what are the other stories? Is anybody else picking it up? And we need to actually establish trusted voices that are deliberately very careful and very scientific about the way they validate and test things. And finally, I think we've got to teach even in the schools, and this is going to start with kids, critical thinking and values — what it is that we care about and why truth matters, why honor matters, why ethics matters, and then to have them bring that into the way they read and look at things that occur online. This is not going to be an easy task, but I do think we need to engage everybody in this process, not just people who are professionals and make it part of the mandate for civil society over the next year or two.” (Michael Chertoff)


“And we do hope the federal government joins us in banning the deceptive use, intentionally deceptive use of artificial intelligence to confuse people about candidates, their positions or how to vote or where to vote or anything regarding elections. So we've drawn a line in the sand. It's a crime to intentionally disseminate through the use of AI deceptive information about our elections. Secondly, we've required the disclaimers and disclosure of any type of information generated by artificial intelligence that's focused on elections. So for example, one of the things we're worried about is, and we know because of AI, it could be targeted to a citizen on their phone getting a text saying, ‘Here's the address of your polling place on Election Day. Don't go there because there's been a shooting and stay tuned for more information.’…


So, in addition to passing these laws, we are setting up voter confidence councils, building out these trusted voices so that faith leaders, business leaders, labor leaders, community leaders, sports leaders, education leaders can be poised, even mayors and local election officials to be aware and push back with trusted information.” (Jocelyn Benson) 


“Let me just be obnoxious. I've sat through all of these trust and safety discussions for a long time, and these are very, very thoughtful analyses. They're not reducing solutions in their analysis that are implementable by the companies in a coherent way. So here's my proposal: Identify the people, understand the provenance of the data, publish your algorithms, be held as a legal matter, that your algorithms are what you said they are, right? In other words, what you said you do, you actually have to do, reform Section 230, make sure you don't have kids, so forth, et cetera. Make your proposals, but make them in a way that are implementable by the team. So for example, if there's a particular kind of piece of information that you think should be banned, write a specification of it well enough that under your proposal, the computer company can stop that. That's where it all fails because the engineers are busy doing what they understand. They're not talking to the lawyers too much. The lawyer's job is basically prevent anything from happening because they're afraid of liability, and you don't have leadership from the Congress for the reasons that you know, and that's why we're stuck.” (Eric Schmidt)


