Synthetic Research With GPT-3
We put the AI behind ChatGPT to work on thousands of articles about the latest trends.
This post includes a live demo that you can try here.
Generative AI technologies like ChatGPT offer incredible potential to help people discover answers and insights. But purely generative models present challenges, especially when it comes to conducting research in a professional setting. They have a limited understanding of current events and culture. They are prone to bullshitting their way through an answer, and because they can’t cite their sources, you don’t always know when they are bullshitting.
One way to overcome these limitations is to feed an AI model like GPT-3 (the foundation model that powers ChatGPT) source materials that are specific to the area you are researching, up-to-date, and accurate. Using this technique, the AI model can become a powerful research assistant, helping you discover answers while citing primary sources.
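To make that concrete, here is a minimal sketch of the prompt-stuffing step, written in Python against the pre-1.0 openai SDK that was current in the GPT-3 era. The model name, prompt wording, and source format are illustrative assumptions, not the exact setup behind our demo.

```python
import openai  # pre-1.0 SDK (pip install "openai<1.0"); expects OPENAI_API_KEY in the environment

def answer_with_sources(question: str, sources: list[dict]) -> str:
    """Stuff retrieved source passages into the prompt and ask GPT-3 to cite them."""
    # Each source is assumed to look like {"title": ..., "url": ..., "text": ...}
    context = "\n\n".join(
        f"[{i + 1}] {s['title']} ({s['url']})\n{s['text']}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources by number, and say so if the sources do not cover the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-era completion model, used here for illustration
        prompt=prompt,
        max_tokens=500,
        temperature=0.2,
    )
    return resp["choices"][0]["text"].strip()
```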
To demonstrate this in action, we set up an "AI research assistant" that aggregates public content from the websites of companies like McKinsey, Mintel, Kantar, and others, indexes the information, and uses it to synthesize responses to a prompt from the user.
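The “aggregate and index” step behind that assistant is, at its core, a small retrieval pipeline: split the collected articles into chunks, embed them, and at question time pull the most similar chunks to feed into a prompt like the one sketched above. Here is a hedged sketch; the embedding model, fixed-size chunking, and top-5 retrieval are simplifying assumptions rather than the demo’s actual configuration.

```python
import numpy as np
import openai  # pre-1.0 SDK, as above

EMBED_MODEL = "text-embedding-ada-002"  # assumption: any sentence-level embedding model would do

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts and return an (n, d) matrix of unit-length vectors."""
    resp = openai.Embedding.create(model=EMBED_MODEL, input=texts)
    vecs = np.array([item["embedding"] for item in resp["data"]])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def build_index(articles: list[dict]) -> tuple[np.ndarray, list[dict]]:
    """Index scraped articles ({"title", "url", "text"}) as embedded chunks."""
    chunks = []
    for art in articles:
        text = art["text"]
        for start in range(0, len(text), 1500):  # crude fixed-size chunking
            chunks.append({**art, "text": text[start:start + 1500]})
    return embed([c["text"] for c in chunks]), chunks

def retrieve(question: str, index: np.ndarray, chunks: list[dict], k: int = 5) -> list[dict]:
    """Return the k chunks whose embeddings are most similar to the question."""
    q = embed([question])[0]
    scores = index @ q  # cosine similarity, since all vectors are normalized
    return [chunks[i] for i in np.argsort(-scores)[:k]]

# Wiring the two sketches together, roughly:
#   index, chunks = build_index(articles)
#   sources = retrieve("How will Generative AI impact the world of business?", index, chunks)
#   print(answer_with_sources("How will Generative AI impact the world of business?", sources))
```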
To explore the benefits of this approach, we’ll compare our Synthetic Research Assistant to ChatGPT using the same question:
How will Generative AI impact the world of business?
First, let's look at how ChatGPT responds:
Generative AI has the potential to impact the world of business in several ways:
Product design and innovation: AI-powered generative design tools can automate and optimize product design, leading to faster and more efficient innovation processes.
Customization and personalization: AI algorithms can generate personalized products, services, and experiences, helping businesses better serve customers' unique needs and preferences.
Supply chain optimization: AI-powered generative algorithms can help businesses optimize their supply chain operations, reducing costs and increasing efficiency.
Advertising and marketing: Generative AI can create custom advertisements and marketing content, allowing businesses to more effectively target specific demographics and improve ROI.
Fraud detection: AI-powered generative models can help detect and prevent fraud by identifying unusual patterns of behavior and transactions.
It's important to note that while generative AI has the potential to bring significant benefits to businesses, there are also ethical and regulatory considerations that must be taken into account.
The answer is compelling on the surface. But look closer and it seems ChatGPT doesn’t understand the difference between “AI” and “Generative AI”, which isn’t surprising given how recently the topic emerged. Several of the use cases it cites, such as Supply Chain Optimization and Fraud Detection, appear to be applications of AI in general rather than of Generative AI, and because there are no sources to check, there’s no way to verify any of it.
In contrast, let's look at how our Synthetic Research Assistant responds with the help of high-quality sources:
Generative AI is a rapidly developing technology that has the potential to revolutionize the world of business. Big Tech companies have already begun to embrace generative AI as a potential money-maker, with Google and Meta releasing text-to-video and video-to-text programs, respectively. NVIDIA also entered the generative AI arena with its eDiff-I, which creates larger, high resolution images with less processing power.
Generative AI has been predicted by Sequoia Capital to “generate trillions of dollars of economic value” due to its ability to augment digital content creators’ and advertisers’ work. Companies like Shutterstock are rushing to be at “the forefront of AI art” by integrating DALL-E 2 into its content platform, planning to make it available to stock image customers globally. This could open up much needed revenue streams for struggling tech companies in the current economic downturn.
However, there are ethical and legal liability concerns associated with commercializing AI art that must be taken into consideration when using this technology for business purposes. Reports about AI producing problematic content could amplify worry about generative adversarial networks (GANs) triggering an existential crisis for society. In addition, broad commercial and consumer adoption of generative AI programs could become a sustainability pitfall due to cloud computing's data center proliferation already sucking up resources and fueling carbon emissions.
Overall, Generative AI offers great potential for businesses looking for new ways to provide value for their customers beyond just offering data and insights; however, caution should be taken when utilizing this technology in order to ensure ethical use while avoiding any legal or environmental consequences that may arise from its widespread adoption.
The AI then goes on to cite the primary source materials it used to generate the answer.
Meta, Google, and startups flock to AI art’s economic potential
https://www.insiderintelligence.com/content/meta-google-startups-flock-ai-art-s-economic-potential
Generative AI made its public debut in 2022—it could be an internet earthquake in 2023
https://www.insiderintelligence.com/content/generative-ai-made-its-public-debut-2022-it-could-internet-earthquake-2023
7 ways brands, marketers, and retailers will use generative AI like ChatGPT and Lensa in the next few years
https://www.insiderintelligence.com/content/7-ways-brands-marketers-retailers-will-use-generative-ai
While Tech Fawns Over AI, Consumers Need More Convincing
https://morningconsult.com/2022/12/07/tech-fawns-over-artificial-intelligence-consumers-need-convincing/
The Metaverse in 2040
https://www.pewresearch.org/internet/2022/06/30/the-metaverse-in-2040/
There are certainly limitations to this approach. Identifying and aggregating high-quality source materials can be time-consuming, and the additional data adds latency and computational costs to the system. But in professional settings, it opens up the opportunity to put Generative AI to work in more reliable and trustworthy ways.
We think Synthetic Research Assistants like this one will become commonplace in the coming years. They can save tons of time spent on desk research and deck writing. But even more compelling is that by creating and curating the source materials for the AI to analyze, human expertise plays a more central role in setting the agenda and enabling new use cases.
Try it out for yourself here. And as always, let us know what you think by replying to this email.
Do you see a mass pay-walling of currently accessible content in favour of new proprietary models as companies recognize the value long tail content has in a MaaS (models as a service) economy?

It’s an interesting thought experiment. There are parallels with the dynamic that played out in the previous era between Google and the open web. Publishing content publicly has clear benefits, chiefly visibility and SEO. In the MaaS era, that public content may be used to train models, but many of those models and systems (like Bing) will be capable of citing sources and driving awareness and visibility back to the publisher. So while this will certainly raise new questions around monetization (why visit a website in the first place when a model gives you all the answers you need?), there will still be strong incentives to create public content. This is a long way of saying we think things will change: publishers will need to find new ways to monetize as more people rely on AI for answers rather than visiting websites. But to some extent things will also remain the same, in that publishing free content will still be table stakes for driving engagement with a publisher, via AI or otherwise.