AI-ready data; Google’s AI Overviews suggest eating rocks; running with scissors; “fake” data; OpenAI announces conversational GPT model
After last month’s cookie saga with Google and new TikTok legislation, one might have thought May would be tame in comparison. Hold onto your hats, because the industry is still on fire!
This may sound like common sense, but generative AI is only as strong as the data that feeds it. We say this often because it’s true: good data in equals good data out. If GenAI is fed inaccurate, outdated data, the intel gained from it will be skewed, if not outright false. If businesses are making decisions and acting on information that turns out to be untrue, they’re not just wasting their time; they’re wasting valuable resources.
According to a report by Forrester Research, “when AI models make incorrect predictions — often by detecting false patterns in training data — they cost advertisers more than they save […] the models are only as accurate as the data on which they’re trained, so they can reinforce biases and inaccuracies as often as they generate performance-improving insights.”
VentureBeat’s recent article reinforces this idea, noting, “proper data collection is not just a task — it’s foundational to the future of any business’ intelligence. When data is accurately captured and managed, AI systems can operate at their full potential, leading to cutting-edge insights and predictive analytics.”
AI can and should be used, but not without solid data as the foundation.
This month, Google took center stage again, this time rolling out AI Overviews — an experiment, previously called “Search Generative Experience,” that allows users who have opted in to Google Labs to see additional generative AI responses to their plain-language queries. These responses are a custom-built bundle of text and links that have been, well, hilarious, absurd and, perhaps, even slightly troubling.
Fast Company reported on “The 7 most shocking Google AI answers we’ve seen so far,” and we are here for it. After all, failed experiments often lead to fantastic breakthroughs. However, be wary of misinformation and always check your sources. Just because AI says it is so doesn’t mean we should believe it. The internet is a vast place, and if something sounds questionable, it might just be.
Here are our personal favorites and the queries that inspired them.
By now we’ve all heard of “fake” news, but you may not have heard of “fake” data. TechCrunch published an article about a company that is “boosting” survey results using synthetic data and AI-generated responses. Surveys are contingent on people. People cost money.
With quality data once again taking the spotlight, synthetic data may make it feasible for machine learning models to offer insights from smaller real-world datasets. This isn’t altogether new – according to the author, synthetic data dates back to the days of early computing, when it was used to test software and algorithms. Today, data is used to train models, and this beta test aims to “address both data scarcity issues as well as data privacy concerns by using artificially generated data that contains no sensitive information.”
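To make the general idea concrete, here is a toy sketch in Python — entirely hypothetical, and not the company’s actual method — of the basic pattern: learn the answer distribution from a small real sample that keeps no personal details, then draw a much larger artificial sample from it.

```python
import random
from collections import Counter

# A tiny, made-up "real" survey sample. No names, emails, or other
# sensitive fields are retained -- only the answers themselves.
real_responses = ["satisfied", "neutral", "satisfied",
                  "unsatisfied", "satisfied", "neutral"]

def fit_distribution(responses):
    """Estimate the probability of each answer from the real sample."""
    counts = Counter(responses)
    total = len(responses)
    return {answer: n / total for answer, n in counts.items()}

def generate_synthetic(dist, n):
    """Draw n artificial responses from the learned distribution."""
    answers = list(dist.keys())
    weights = list(dist.values())
    return random.choices(answers, weights=weights, k=n)

dist = fit_distribution(real_responses)
synthetic_sample = generate_synthetic(dist, 1000)  # the "boosted" sample
print(Counter(synthetic_sample))
```

The catch, of course, is visible right in the sketch: the synthetic sample can only ever mirror the small real sample it was learned from, quirks and all.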
This doesn’t mean that real data isn’t used. On the contrary, Adweek says, “for synthetic data to exist, models still need access to real data.” The New York Times cautions, however, that AI models “pick up on the biases that appear in the internet data from which they have been trained. So if companies use A.I. to train A.I., they can end up amplifying their own flaws.”
Like all experimental AI at this stage, take it with a grain of salt: question responses, look for biases and misinformation, and – surprise – make sure the data is reliable.
Remember last month when we talked about GenAI fatigue? There may be fatigue, but we’re certainly not finished talking about it. And why should we be? Things are changing fast, and it truly is a transformative era.
GPT-4o, the latest model from OpenAI, is getting smarter! No, really. Conversational models that can see, hear and analyze in real time are here. MediaPost reports, “this new model can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API.”
Creepy or cool? You can be the judge.
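If you’d rather judge firsthand, the sketch below shows a minimal text-only call to the model through OpenAI’s official Python SDK (assuming the `openai` package, version 1 or later, and an `OPENAI_API_KEY` set in your environment). The real-time voice and vision features described above use additional request options not shown here.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A plain text request is the simplest way to try GPT-4o; audio and
# image inputs go through the same API with extra message content types.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this month's AI news in one sentence."},
    ],
)

print(response.choices[0].message.content)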
We love a friendly competition. With influencer marketing continuing to grow and marketers allocating more of their ad budget to this category, they are keeping even tighter tabs on their competitors. According to Influencer Marketing Hub data, via Digiday, “Influencer marketing ad spend is expected to hit $24 billion by the end of 2024.”
This means a thorough analysis of what is and isn’t working in the space. As the old adage (often attributed to Oscar Wilde) goes, “imitation is the sincerest form of flattery.” If a brand is doing well, you can bet people are paying attention and taking note.
“As marketers dig into analysis of competitors’ efforts it’s not just that they want to be able to understand what competitors are doing but have a better understanding of how something may work before spending major ad dollars on it.”
Observation, analysis and awareness will be key in this area, and it is fair to assume there will be even more developments in how one assesses the success of influencer marketing as time goes on.
Thank you for reading! We have some news in the works in the coming months and hope you come back for the next edition of our monthly trends watch.
As always, we invite you to sign up for our newsletter and get the latest news directly in your inbox!
Courtney is a seasoned communications and public relations professional with 17+ years of experience working in both the public and private sectors in diverse leadership roles. As Data Axle’s Senior Public Relations Manager, she is intently focused on elevating the company’s media relations presence and increasing brand loyalty and awareness through landing coverage in top-tier media outlets.