When I first got the brief from the team at BBDO in Dubai, I didn’t know what to think.
They wanted to build an ‘AI President’ for Lebanon.
The team had created Newspapers within the Newspaper for An Nahar, Lebanon’s oldest newspaper, a campaign that won a Grand Prix at last year’s Cannes Lions festival of creativity, and now they wanted to push the envelope with their client.
Lebanon had been without a President for over a year. The situation was fraught. A financial crisis. A political crisis. A geo-political crisis. And systemic issues that expressed themselves through corrupt political actors, with each faction looking out for itself at the expense of the whole.
What would happen if we trained an AI to do the job of Lebanon’s president?
To find out, and more importantly, to spark a conversation, we created an AI that’s powered by An Nahar’s 90 years of investigative journalism. Their entire archive, going back to the 1930s, was fed into an AI system powered by a cutting-edge LLM that uses the information to think through problems and identify potential solutions.
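The post doesn’t go into the architecture, but systems like this typically follow a retrieval-augmented pattern: index the archive, pull the passages most relevant to a question, and hand them to the LLM as context so its answer is grounded in the reporting rather than its general training data. The sketch below is purely illustrative, assuming that pattern; the archive snippets, the scoring approach, and the `ask_llm` placeholder are my own assumptions, not details of the actual An Nahar system.

```python
import math
from collections import Counter

# Hypothetical archive passages (illustrative only; not real An Nahar text).
ARCHIVE = [
    "1975 editorial on the causes of the civil war and sectarian power-sharing.",
    "2019 investigation into the banking sector and the roots of the financial crisis.",
    "2020 reporting on the Beirut port explosion and government accountability.",
]

def tokenize(text: str) -> list[str]:
    # Lowercase and strip basic punctuation to get comparable tokens.
    return [t.lower().strip(".,") for t in text.split()]

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Bag-of-words cosine similarity between two token-count vectors.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank archive passages by similarity to the question and keep the top k.
    q_vec = Counter(tokenize(question))
    return sorted(
        ARCHIVE,
        key=lambda doc: cosine_similarity(q_vec, Counter(tokenize(doc))),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    # Ground the model's answer in retrieved journalism rather than its own priors.
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return (
        "Using only the archive excerpts below, propose a policy response.\n"
        f"Archive excerpts:\n{context}\n"
        f"Question: {question}\n"
    )

def ask_llm(prompt: str) -> str:
    # Placeholder: a real system would call whatever LLM the project actually uses.
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("How should the banking crisis be resolved?"))
```

In practice a production system would use a proper search index and embeddings rather than word counts, but the shape is the same: retrieve first, then generate.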
Last week An Nahar launched the AI President via a TV interview with their editor-in-chief. The AI President’s political platform, generated by the AI we trained, ran in the newspaper’s print edition, and this week we made OurPresident.ai open to the public.
The solutions are not perfect. AI cannot predict the future. There’s no guarantee that the solutions will work. Or even be implemented. But the answers are thoughtful and rooted in data. And, more importantly, they are free from the biases of sectarianism and corruption that plague Lebanese politics.
To me, it raises a fascinating question.
We often have these hypothetical debates about ‘AI’ versus ‘humans’, and when doing so, we imagine a platonic ideal of the human. An expert. Free from bias. Competing against a system that’s generalized, plagued by the bias of our past, and prone to misinformation. But is this always a fair comparison?
When we think about our politicians in the United States, for example, are they really acting without ‘bias’? Are they truly motivated beyond their own self-interest? Are they telling us the truth? Politics, like so many of our institutions, is plagued by bias, hallucinations, and alignment problems.
Now, to be clear, I am not advocating that AI should take over the government.
But as AI becomes more powerful, could it pressure us to become better humans? Encourage us to get our collective act together? Motivate us to act with less bias and self-interest? Remind us to stay grounded in the truth?
AI is getting better. Can we?
[Disclaimer: This post reflects my personal views and in no way represents the views of the other organizations mentioned in this piece with whom we collaborated.]