Sponsorship | The Community | Sign Up
Friday’s issue is in partnership with…
____________
🔓 Friday’s AI Report:
🚨 OpenAI creating alarmist safety culture?
🚀 In partnership with Miso Robotics
📜 Anthropic advises White House on AI safety
🔍 Amazon-backed start-up has poor labor standards?
🕚 In partnership with Eleventh AI
⚙️ Trending AI Tools
🏗️ Practical AI Applications
📑 Recommended Resources
Read Time: 5 minutes
‼️ The latest episode of the AI Report podcast just dropped! Zayd Syed Ali, the founder of Valley—a revolutionary AI sales tool—talks about why the ‘spray and pray’ approach no longer works and how sales teams can leverage AI-driven insights to target the right prospects at the right time.
If you’re involved in sales, you really don’t want to miss this one.
➡️ Watch it here ⬅️
The AI Accelerator launches on March 11th…
Top AI courses, 1-on-1 consulting, ChatNode access & much more.
Just 4 days to go
📈 AI STOCK TRACKER
The US tech sector experienced a significant downturn yesterday, with major indices like the Nasdaq Composite falling 2.6%. The drop was largely driven by investor concerns over global trade tensions and disappointing revenue guidance from companies like Marvell Technology, leading to substantial losses in prominent tech stocks, including NVIDIA, Tesla, and TSM.
LATEST IN AI
🚨 Our Report — Miles Brundage, OpenAI’s high-profile former head of policy research, has accused OpenAI of “rewriting the history” of its old approach to releasing potentially risky AI systems—such as GPT-2, a release Brundage was heavily involved with—and of creating a culture where raising safety concerns is seen as “alarmist.”
🔓 Key Points:
OpenAI recently outlined a ‘new’ approach to AI safety, shifting from a “discontinuous world” approach—leading to over-caution with releases—to a “continuous path” that involves “iteratively deploying and learning.”
“In a discontinuous world, safety lessons came from treating systems with outsized caution, which is the approach we took for GPT‑2,” but to make the next system safe, “we need to continuously learn.”
Brundage argued that GPT-2 warranted “abundant caution” (at the time), and OpenAI released the model “incrementally, with lessons shared at each step,” meaning the GPT-2 release already aligned with this ‘new’ approach.
🔐 Relevance — Brundage has taken umbrage with OpenAI’s recent policy document because he believes that, as well as misstating the facts, OpenAI is “poo-pooing” caution. He worries this will create an environment where legitimate concerns about AI safety are downplayed, labeled as exaggerated, or dismissed as “alarmist,” so that people will need “overwhelming evidence of imminent dangers” before acting, which he calls a “very dangerous” mentality for advanced AI systems. It would, however, speed up product releases, which would help OpenAI’s competitiveness and its bottom line.
TOGETHER WITH MISO ROBOTICS
ChatGPT turned AI from a concept to a household name.
Well, NVIDIA CEO Jensen Huang says robotics is a “multitrillion-dollar” market with a breakthrough moment “just around the corner.”
Look no further than NVIDIA collaborator Miso Robotics.
Their newest AI-powered kitchen robot, Flippy Fry Station, is 2X faster, 50% smaller, and 75% quicker to install than its predecessor. Major brands like White Castle and Jack in the Box are already lining up for the tech.
And now, with signed orders, new customers on the way, and NVIDIA as a collaborator, they’re scaling this tech at exactly the right time.
Best of all?
Unlike OpenAI, you can currently invest in Miso – but you're officially in the final weeks of being able to do so. Share in Miso’s growth as an investor while you still can.
This is a paid advertisement for Miso Robotics’ Regulation CF offering. Please read the offering circular at https://invest.misorobotics.com
🚨 Our Report — AI start-up Anthropic (maker of ChatGPT rival Claude) has submitted a set of AI policy recommendations—centered on six key areas, including testing, security, export controls, and energy infrastructure—to the White House in response to its request for a plan to “better prepare America to capture the economic benefits of AI,” positioning the country as a leader in AI while also addressing national security concerns.
🔓 Key Points:
Anthropic supports Biden’s AI Safety Institute—as it’s crucial for AI safety research—and wants the National Institute of Standards and Technology to develop security evaluations to address potential vulnerabilities.
In the interest of national security and maintaining global power, it called for stronger export controls for advanced AI chips—asking for particularly tough restrictions on NVIDIA H20 chip exports to China.
It also recommended that, by 2027, the US dedicate an additional 50 gigawatts of power to AI data centers to meet the growing demand for computational capacity and help advance AI development.
🔐 Relevance — Anthropic has a reputation for prioritizing AI safety and has shown a commitment to the responsible development of AI, so these recommendations will likely carry a lot of weight within the government, especially as they appear to strike a balance between advancing America’s leadership in AI and mitigating risk. However, experts have noted that several of the recommendations align with former President Biden’s AI Executive Order (aimed at promoting responsible AI development), which Trump revoked on his first day in office on the grounds that its reporting requirements were “overly burdensome,” so it will be interesting to see how the White House responds.
The US Department of Labor (DoL) is investigating Scale AI, an AI data-labeling start-up—founded by Alexandr Wang and backed by Amazon and NVIDIA—for its compliance with fair labor standards.
The relevant law, the Fair Labor Standards Act, addresses issues like unpaid wages and worker misclassification; the investigation was triggered after workers filed lawsuits against Scale claiming they were denied benefits like overtime pay and sick leave.
Scale AI has strongly disputed the lawsuits, but the DoL has established that—while the case could be dismissed—employers found to have violated the law “may be subject to fines and potentially imprisonment.”
TOGETHER WITH ELEVENTH AI
Your time is too valuable to be spent on inefficiencies.
What if AI and automation could unlock huge time and cost savings, allowing your team to focus on strategic growth?
In just 3-5 weeks, Eleventh AI’s Audit provides you with:
Actionable workflow improvements powered by AI and automation
An AI roadmap to identify and prioritize high-value efficiency opportunities
An ROI analysis to forecast cost savings and productivity gains
Our AI audits have identified opportunities that have helped over 100 enterprises save 80,000+ hours of manual work without the need to hire additional staff.
Curious about the reports we’ve made for others, and whether your business qualifies, too?
TRENDING AI TOOLS
Brilliant: Future-proof your skills in minutes a day with a library of quick, interactive lessons in AI, programming, logic, data science, and more. Try it for nothing, for 30 days ⭐⭐⭐⭐⭐ / 5 (Trustpilot)
Werd uses AI to provide a stream of content ideas, tailored to your audience
NoteX transforms lengthy content into key insights
PRACTICAL AI
Prompt Inspiration | After typing this prompt, you will get tools and techniques to help you prioritize tasks and manage your workload. P.S. Use the Prompt Engineer GPT by The AI Report to 10x your prompts. | What are some effective techniques for prioritizing tasks and managing a busy workload? |
PRO COMMUNITY
If you’re looking for ways to use AI to cut time and make more, come join our Pro Community: It’s a space filled with like-minded people, all sharing ideas, insights, and inspiration around AI.
“To date, the information from this effort has been timely and invaluable… thanks to the AI Report and the awesome community.”
📰 Each week, you’ll get a premium AI-focused newsletter with in-depth case studies, practical ways to utilize AI, and expert resources to further your knowledge.
🟢 You’ll get exclusive access to a community with over 1,000 AI professionals, plus countless cheat sheets, tool recommendations, and prompt databases.
🕐 And, you’ll get early access to podcast episodes, live events, and accreditation modules worth over $500, for nothing.
Subscribe to our Pro Community for premium networking opportunities, tools, insights, and tactics.
RECOMMENDED RESOURCES
STARTUPS
Name: Arthur AI
Value: $145M
Funding raised: $63M (Series B)
Arthur AI is an AI-powered monitoring and bias detection platform that provides tools to help track the performance of AI models, identify issues—like bias—help find anomalies, and make AI model maintenance more efficient.
PODCASTS
In this episode of The AI Report Live, Liam dives deep into the future of AI-powered sales automation with Zayd Syed Ali—founder of Valley, a revolutionary AI sales tool reshaping the outbound process.
QUICK HITS
ASK THE EXPERTS
PICK THE BRAINS OF OUR AI EXPERTS | Ask us anything, and the top question will be answered every Friday.
Hit reply and tell us what you want more of!
Got a friend who needs to learn more about AI? Sign them up for The AI Report here.
Until next time, Martin, Liam, and Amanda.
P.S. Unsubscribe if you don’t want us in your inbox anymore.