On February 2, 2024, famed singer-songwriter and actress Lainey Wilson testified at a House Judiciary Committee field hearing in Los Angeles, California. She may not have been performing on a stage, but she gave voice to what so many people, especially artists and musicians, have felt for some time.
“I do not have to tell you how much of a gut punch it is to have your name, your likeness, or your voice ripped from you and used in ways that you can never imagine or would never allow. It is wrong. Plain and simple.”
Wilson was referring to the use of AI to exploit her work and public image. In June 2023, the likenesses of Wilson and fellow country singer Luke Combs were used in online ads for keto weight-loss gummies. The ads overlaid AI-generated conversations between the two onto authentic video, in an alleged attempt to cash in on their names and deceive viewers into buying the product under a false endorsement.
She isn’t alone. By the time of that hearing in LA, countless people, from musicians to journalists, had spoken up all over the Internet about AI using their work without their permission, sparking outrage, fear, concern, and a plethora of lawsuits.
Wilson’s words summarize the betrayal and frustration that has been building to a fever pitch over the past few years as big tech companies have scraped every last corner of the Internet for data to train their AI models.
The situation came to a head on April 6, 2024, when a bombshell New York Times report revealed that OpenAI, Google, and Meta had bent their own rules to feed their AI models copyrighted works, despite the risk of legal backlash. The report highlighted the iceberg of data hiding beneath large language models like ChatGPT and Google Gemini, and the lines big tech companies are willing to cross to get even more data.
As OpenAI, Google, and Meta battle for dominance in the AI arms race, everyone who uses the Internet is caught in the middle.
The real cost of AI innovation so far has been our collective data privacy.
You are what you eat: AI’s endless hunger for data
AI data scraping has exploded over the past few years due to heated competition for dominance in the AI market. Large language models need massive amounts of data to learn how to duplicate realistic speech, generate images, translate languages, and more.
That data has to come from somewhere, but AI developers are running out of sources.
According to the New York Times report, the industry-leading AI developers (OpenAI, Google, and Meta) have already consumed nearly all of the “high-quality data” available online. That includes news articles, stock photos, research papers, and fan fiction.
Of course, AI data scraping started with freely available data, such as Creative Commons content and Wikipedia articles. However, by 2021, the massive well of data on the Internet was running dry, pushing AI developers to bend the rules (and their morals).
For example, Google unveiled a controversial update to its privacy policy that went into effect over the July 4 holiday weekend in 2023, perhaps hoping that most people would be too busy with Independence Day festivities to notice.
The privacy policy update massively increased the scope of how Google could use “information that’s publicly available online or from other public sources,” potentially even including Google Docs and data in Google’s other free office apps. The update allowed Google to use this data not just for Google Translate, but for Google’s AI models in general, including Gemini, formerly called Bard.
Similarly, the Times reported that OpenAI used its speech recognition tool, Whisper, to transcribe YouTube videos into training data, a practice that potentially violated the copyrights of many video creators.
If you thought Google would step in to stop this and protect creators on its platform, think again. According to the Times report, Google let OpenAI's practice slide, reportedly out of concern that cracking down would invite scrutiny of its own use of YouTube data.
Earlier this year, Meta even toyed with buying the major publishing house Simon & Schuster to use authors’ work to train its AI.
AI’s bottomless hunger for data has reached a point where seemingly no one and nothing is safe. Is it too late to reverse this trend and preserve data privacy in the age of AI?
Time is running out to find an ethical path forward for AI
As Lainey Wilson put it in her remarks at the February 2 judiciary field hearing: “It’s not just artists who need protecting. The fans need it, too.”
Something needs to be done, and soon. Legislation and regulation are the linchpin in the fight for data privacy against AI. The European Union has already passed the AI Act, the world's most comprehensive regulatory framework for AI, but no similar bill has appeared in the U.S., despite the formation of a congressional AI task force earlier this year.
Despite a lack of federal action, organizations and activists all over the country are stepping up and speaking up.
At the February 2 judiciary field hearing, Lainey Wilson was representing the Human Artistry Campaign, an alliance of dozens of creative organizations calling for policies to protect creative professionals and their fans from artificial intelligence.
Likewise, the American Civil Liberties Union and Algorithmic Justice League have called for action on racial bias in AI due to biased training data. They’re just the tip of the iceberg.
Some organizations are taking things into their own hands to stop AI from using their data. For example, The Guardian announced in 2023 that it was blocking OpenAI from scraping its website for training data. Countless creative professionals and organizations are also suing AI developers for copyright infringement.
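Blocks like The Guardian's are typically implemented through the Robots Exclusion Protocol: OpenAI publicly documents "GPTBot" as the user-agent string for its web crawler, and a site can refuse it in its robots.txt file. The rules below are an illustrative sketch of how such a block works, not The Guardian's actual file:

```text
# Refuse OpenAI's crawler site-wide ("GPTBot" is the user-agent
# name OpenAI documents for its web crawler)
User-agent: GPTBot
Disallow: /

# Other crawlers are unaffected unless listed separately
User-agent: *
Allow: /
```

Note that robots.txt is a request, not an enforcement mechanism; compliance is voluntary on the crawler's part, which is one reason publishers are also turning to lawsuits and licensing deals.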
As we near the bottom of the seemingly endless well of data for AI to gobble up online, the clock is ticking to protect all of us from the abuse and misuse of our data. Organizations like those above may be the only thing standing between aggressive data mining and the privacy rights of billions.
Whether you're an AI enthusiast, a casual tinkerer, or simply after some of the extra features offered by Windows Copilot+ PCs or Apple Intelligence on Macs, you'll need a powerful, high-performance laptop to keep up with your needs.
At Laptop Mag, we review laptops year-round to ensure we're giving you expert-backed and up-to-date recommendations on which notebook is right for you. When it comes to the best AI PC category, our top picks are the excellent Asus Zenbook S 14 (UX5406) for Windows users and the impressive Apple MacBook Air M3 for those running macOS.
So, if you’re shopping for a new laptop and looking to invest in an AI PC (or just a great laptop in general), check out our current top-tier picks below.
Best Mac for AI
We love the MacBook Air 13 M3. Starting at just $1,099 (MSRP), with education pricing dropping to $999 (MSRP), the Air is a laptop we can recommend for just about any purpose. It’s affordable, especially by Apple standards, and it features an excellent keyboard, fantastic performance, and outstanding endurance (over 15 hours of battery life), which makes it a great laptop for just about anyone’s needs, especially those interested in getting to grips with all of the latest Apple Intelligence features.
Best Windows AI PC
The Asus Zenbook S 14 (UX5406) has quickly become our favorite AI PC laptop of the year, offering all the hallmarks of a great buy, including exceptional performance and battery life. This laptop is one of the first to feature an Intel Core Ultra 200V series processor, and at just $1,499 (MSRP) it delivers a fantastic balance of power, a stunning 14-inch OLED display, effortless multitasking, NPU-enhanced performance for AI tasks, and all of the additional Copilot+ features available with Windows 11.