On February 20, 2024, a rare moment of bipartisan unity washed over Capitol Hill in Washington, D.C., as Speaker of the House Mike Johnson (R-LA) and House Minority Leader Hakeem Jeffries (D-NY) announced the formation of an AI task force to craft a framework for AI regulation.
Bringing together representatives from both sides of the aisle, the task force would address growing concerns about the unregulated rise of AI and its potential, and proven, impact on the American public. However, since February, Congress’ task force has yet to deliver.
When asked about the status of the report, a representative for task force co-Chairman Ted Lieu (D-CA) tells Laptop Mag: “We won’t be able to comment before the release of the report, which is expected to come out before the end of the year.” However, a representative for Chairman Jay Obernolte (R-CA) was even less forthcoming, telling Laptop Mag, “Unfortunately, I won’t be able to get a statement.”
Whether Congress’ AI task force is remembered for its action or inaction remains to be seen. However, there’s no doubt that something needs to hold AI platforms accountable for their potential impact on society and the very real risks they pose to the job market, Internet safety, and the spread of misinformation online.
Congress eyes the risks and rewards of AI
The task force’s composition reflects Obernolte’s comments during a September 2023 POLITICO AI & Tech summit panel, where he offered a hint of how Congress would need to come together to tackle AI: “It has to be bipartisan and it has to be bicameral because the last thing that anyone wants is that every four years when the balance of power changes a little bit, the government’s approach to AI changes.”
As such, beyond Chairman Jay Obernolte and co-Chairman Ted Lieu, Congress’ AI task force is rounded out by 22 additional members, selected evenly from each side of the aisle.
The task force’s formation was preceded by several AI-centric controversies, including January robocalls imitating President Joe Biden that attempted to dissuade voters from participating in New Hampshire’s Democratic primary election.
However, while the FCC was quick to declare AI-generated voices in phone calls illegal, the AI task force’s goals are further-reaching and arguably more important. The task force’s formation revealed that AI had become a paramount concern by early 2024, one significant enough that Democrats and Republicans alike agreed Congress needed to take action.
Following the task force’s launch in February 2024, Obernolte outlined its goals in a press release, explaining, “As new innovations in AI continue to emerge, Congress and our partners in [the] federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI.”
Lieu also shared his support, highlighting the tenuous balance of promise and pitfalls in AI development: “AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI.”
This task force has been the clearest indicator of the U.S. government finally realizing the potential impact AI could have on the future of the country and the world. Whether that impact is for better or worse depends, in part, on how the government handles the risks of AI and supports its potential to improve lives.
The cloudy future of AI regulation
It’s no small feat bringing representatives from the Democratic and Republican parties together. However, the wheel of democracy turns slowly, and now several months removed from the task force’s formation, there has been little to show in terms of results.
In a September 2024 POLITICO Tech Live podcast, Obernolte shone a light on the task force’s progress, sharing, “We are well along on our charted task, which is to, by the end of the year, develop a report detailing a proposed Federal regulatory framework for AI.”
However, Obernolte was also quick to set expectations: “This is not going to be one, 3,000-page AI bill like the European Union passed last year, and then we’re done. Problem solved, we don’t have to worry about this again.”
It would appear that Congress’ AI task force has the long game in mind when it comes to AI, with Obernolte explaining, “I think that AI is a complicated enough topic and a topic that is changing quickly enough that it merits an approach of incrementalism.
“I think we have to accept that the job of regulating AI is not going to be one 3,000-page bill, it’s going to be a few bills a year for the next ten years as we get our arms around this issue.”
Congress may be taking its time to deliberate on AI regulation, but the slow and steady approach risks leaving the task force perpetually behind as developers continue to push the boundaries of what AI can accomplish at a blistering pace.
The past year has seen an explosion in AI development, with new models from Meta, Google, OpenAI, and Apple competing for users and market dominance. All the while, major issues like deepfakes, misinformation, AI’s impact on academic integrity, and job security have gone largely unresolved.
These issues pose a serious threat to user safety across the Internet, compounding the existing risk of AI’s impact on the job market. While those risks go unaddressed, the positives of AI are left tainted, and the real ways it can help people all over the world are overshadowed. Government regulation may not be a Holy Grail for AI safety, but it is an important piece of the puzzle.
Elsewhere in the political landscape, the White House’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlined 270 actions the current administration hoped to implement to address these issues, had gained the support of several major AI industry figures by June, including Apple, Google, Microsoft, Meta, and OpenAI.
However, as 2024 nears its end, Capitol Hill’s AI task force is left tight on time to tackle AI.
If you’re anything from an AI enthusiast to the average AI tinkerer (or simply seeking out some of the additional features offered through Copilot+ PCs on Windows or Apple Intelligence on Macs), then you’ll need a powerful, performant laptop to keep up with your needs.
At Laptop Mag, we review laptops year-round to ensure we’re giving you expert-backed and up-to-date recommendations on which notebook is right for you. When it comes to the best AI PC category, our top picks are the excellent Asus Zenbook S 14 (UX5406) for Windows users and the impressive Apple MacBook Air M3 for those running macOS.
So, if you’re shopping for a new laptop and looking to invest in an AI PC (or just a great laptop in general), check out our current top-tier picks below.
Best Windows AI PC
The Asus Zenbook S 14 (UX5406) has quickly become our favorite AI PC laptop of the year, offering all the hallmarks of a great buy, including exceptional performance and battery life. This laptop is one of the first to feature an Intel Core Ultra 200V series processor and, at just $1,499 (MSRP), you get a fantastic balance of power, a stunning 14-inch OLED display, effortless multitasking, NPU-enhanced performance for AI tasks, and all of the additional Copilot+ features available with Windows 11.
Best Mac for AI
We love the MacBook Air 13 M3. Starting at $1,099 (MSRP), with education pricing dropping to $999, the Air is a laptop we can recommend for just about any purpose. It’s affordable, especially by Apple standards, and features an excellent keyboard, fantastic performance, and outstanding endurance (over 15 hours of battery life), making it a great fit for almost anyone, especially those interested in getting to grips with all of the latest Apple Intelligence features.