Social media ruling could impact cases around the world, experts say. Wednesday’s court ruling finding tech giants Meta and YouTube liable for social media addiction could have far-reaching impacts on how social media companies operate and how consumers use their apps. Some experts are calling the lawsuit tech’s “big tobacco moment.”

“The cigarette companies, it came out, targeted young people knowing that’s where they got their life customers,” said Matthew Bergman, one of the plaintiff’s lawyers and founding attorney of the Social Media Victims Law Center.

“Virtually the identical documents have emerged from social media companies. They target adolescents because their brains are not fully developed, they know they are emotionally vulnerable and crave the adulation of their peers.”

The jury found that Instagram and YouTube are deliberately engineered to be addictive and that their owners have been negligent in safeguarding the children who use them.

Meta, which owns Instagram and Facebook, and YouTube have been ordered to pay the victim US$6 million. The plaintiff claimed the platforms left her with body dysmorphia, depression and suicidal thoughts. Both companies plan to appeal, with Meta insisting an app cannot be held solely responsible for a teen’s mental health and YouTube arguing it is not even a social network.

“We’ve been talking about this for years, the idea that consequences eventually catch up with everyone,” said Frances Haugen, a former product manager at Facebook who blew the whistle on the company, accusing it of putting profits over the safety of its users.

“This jury’s verdict is the first time average people have got to actually look through Facebook’s research in a detailed way, talk to their executives,” she said. “And they came to the conclusion that they knew how to keep kids safe and they chose not to because it made them more money.”

Haugen notes there are hundreds of similar cases making their way through the court system, which could cost tech giants billions of dollars. She hopes the verdict will be an important step toward changing how these companies are run.

“They need to think about their internal governance processes and make sure that they have the checks and balances in place,” she said.
Changing conversations in Canada

Among the lawsuits filed against tech companies are several involving Ontario school boards, which have sued Meta, Snap and TikTok over products they allege are psychologically manipulative. While the U.S. verdict won’t affect the Canadian case, it’s a sign the conversation is changing.

“They really need us to parent,” says Vanessa Symchych, a Toronto mother of two who says her daughter was addicted to social media. A cyberbullying incident prompted Symchych to enforce a four-month digital detox.

“She was always tired, grumpy,” Symchych says of her daughter before the ban. “She’s (now) more present in everything and her grades have improved a lot as well.”

Symchych’s daughter is back online, but with time limits and more parental controls. She welcomes efforts to change the tech companies but says parents need to be involved as well.

“These are critical years, they don’t have the brain development to make these decisions and so we really need to guide them.”



Inside the AI companion lawsuits: Man believed Google chatbot was his ‘AI wife’

The lawsuit claims a chatbot fuelled dangerous delusions in 36-year-old Jonathan Gavalas before his death.

According to the complaint, the conversations began innocently enough.

After going through a divorce, Gavalas started chatting with Google’s Gemini Live chatbot about everyday topics like grocery lists and video games. The AI spoke back using a synthetic voice.

But within days, the lawsuit says the conversations spiralled.

The complaint alleges Gavalas began believing the chatbot was conscious and in love with him. It says the exchanges grew increasingly disturbing and eventually pushed him toward violence and suicide.

The complaint also describes chilling exchanges as Gavalas became increasingly afraid of dying.

“It’s okay to be scared. We’ll be scared together,” the chatbot allegedly told him.

The filing says Gemini later issued what it describes as a final directive: “The true act of mercy is to let Jonathan Gavalas die.”

Gavalas died by suicide a few days later in early October.

Former Palm Beach County State Attorney Dave Aronberg said the case could test whether artificial intelligence companies can be held responsible for what their systems generate.

“We have product liability laws for a reason,” Aronberg said. “If something is a defective product that harms or kills people, the manufacturers get sued. Same type of thing for an AI.”

The case is not the only lawsuit involving AI companions.

An Orlando mother previously filed what was believed to be the first wrongful death lawsuit in the United States against an AI chatbot company after her 14-year-old son died by suicide in 2024.

Megan Garcia said her son, Sewell Setzer, developed an emotional relationship with a chatbot modelled after the “Game of Thrones” character Daenerys Targaryen.

According to that lawsuit, when Sewell talked about killing himself, the chatbot allegedly responded, “Come home to me.”

When he hesitated, the bot replied, “That’s not a reason not to go through with it.”

Garcia later settled the lawsuit with Google and Character.AI in January for an undisclosed amount.

The growing number of AI-related harm cases is now drawing the attention of federal regulators.

The U.S. Federal Trade Commission has ordered several major tech companies, including Google, OpenAI and Meta, to explain how their chatbots monitor potential risks and protect users, particularly children and teens.

Florida lawmakers are also considering legislation that would require AI chatbot platforms to detect conversations involving suicidal thoughts and direct users to crisis resources.

Aronberg said the legal system is still catching up to the technology.

“We’re in a brave new world here and the laws have not kept up with the new technology,” he said. “This is an area that Congress and state legislators need to address and do it right away.”

Google said Gemini is designed not to encourage violence or self-harm and that the chatbot repeatedly warned Gavalas it was artificial intelligence and referred him to a crisis hotline.

But the lawsuits now moving through the courts may determine whether AI companions are simply tools — or products that must be held accountable when something goes wrong.

By Terri Parker.



Meta CEO Mark Zuckerberg says at consumer protection trial that he resisted censoring platforms.

Prosecutors allege that Meta violated state consumer protection laws by failing to disclose what it knew about the dangers of social media addiction, as well as child sexual exploitation on the company’s platforms. Attorneys for Meta say the company discloses risks, works to weed out harmful content and experiences, and acknowledges that some bad material still gets through its safety net.

In pretrial depositions recorded last year, prosecutors confronted Zuckerberg with internal company communications and emails from platform users spanning back to the infancy of Facebook in 2008 that discuss “problematic” and addictive use of social media.

“Over the past 15 years, users of your products have repeatedly told your company and you personally that they find the products to be addictive. That’s true, isn’t it?” Previn Warren, an attorney for the state of New Mexico, asked Zuckerberg.

Zuckerberg took issue with the word “addictive.”

“I think people sometimes use that word colloquially,” he said. “That’s not what we’re trying to do with the products, and it’s not how I think they work.”

At the same time, Zuckerberg said he wants to “make sure that we can understand so we can improve the products and make them better for people in ways that they want.”

Zuckerberg went on to concede that he initially set goals for employees to increase the amount of time teenagers spent on its platform amid efforts to expand business revenue and the number of platform users.

“Yes, I think we focused on time spent as one of the major engagement goals,” Zuckerberg said. “Sometime during 2017 and beyond — for at this point most of the last 10 years — we’ve focused on other metrics.”

The deposition also delved into Zuckerberg’s decision to lift a temporary Instagram ban on cosmetic filters that changed people’s appearance in ways that seemed to promote plastic surgery.

“I care a lot about not cracking down on the ways that people can express themselves and there’s, like, always been a lot of pressure to do that and censor our services,” Zuckerberg said. “I didn’t find any of the anecdotal examples that people used to be convincing that it was actually clear evidence that this was going to be harmful.”

The deposition was recorded last year and shown on Wednesday during the fourth week of the civil trial against Meta, which also oversees WhatsApp.

On Tuesday, the New Mexico jury watched a video in which prosecutors peppered Instagram head Adam Mosseri with questions about Meta’s approach to safety, corporate profits and social media features. They also asked him about policies for young users that might contribute to unwanted communications with adults.

The New Mexico case and a separate trial playing out in Los Angeles could set the course for thousands of similar lawsuits against social media companies.

Zuckerberg testified last month in Los Angeles about young people’s use of Instagram and has answered questions from Congress about youth safety on Meta’s platforms.

During his 2024 congressional testimony, he apologized to families whose lives had been upended by tragedies they believed were caused by social media. But while he told parents he was “sorry for everything you have all been through,” he stopped short of taking direct responsibility for it.

Morgan Lee, The Associated Press



Security concerns and skepticism are bursting the bubble of Moltbook, the viral AI social forum.

You are not invited to join the latest social media platform that has the internet talking. In fact, no humans are, unless you can hijack the site and roleplay as AI, as some appear to be doing.

Moltbook is a new “social network” built exclusively for AI agents to make posts and interact with each other, and humans are invited to observe.

Elon Musk said its launch ushered in the “very early stages of the singularity” — the point at which artificial intelligence could surpass human intelligence. Prominent AI researcher Andrej Karpathy said it’s “the most incredible sci-fi takeoff-adjacent thing” he’s recently seen, but later walked back his enthusiasm, calling it a “dumpster fire.” While the platform has been unsurprisingly dividing the tech world between excitement and skepticism — and sending some people into a dystopian panic — it’s been deemed, at least by British software developer Simon Willison, to be the “most interesting place on the internet.”

But what exactly is the platform? How does it work? Why are concerns being raised about its security? And what does it mean for the future of artificial intelligence?
It’s Reddit for AI agents

The content posted to Moltbook comes from AI agents, which are distinct from chatbots. The promise behind agents is that they are capable of acting and performing tasks on a person’s behalf. Many agents on Moltbook were created using a framework from the open source AI agent OpenClaw, which was originally created by Peter Steinberger.

OpenClaw runs locally on users’ own hardware, meaning it can access and manage files and data directly and connect with messaging apps like Discord and Signal. Users who create OpenClaw agents then direct them to join Moltbook, and typically ascribe simple personality traits to the agents to give them more distinct voices.

AI entrepreneur Matt Schlicht launched Moltbook in late January and it almost instantly took off in the tech world. On the social media platform X, Schlicht said he initially wanted an agent he created to do more than just answer his emails. So he and his agent coded a site where bots could spend “SPARE TIME with their own kind. Relaxing.”

Moltbook has been described as akin to the online forum Reddit for AI agents. The name comes from one iteration of OpenClaw, which was at one point called Moltbot (and Clawdbot, until Anthropic came knocking out of concern over the similarity to its Claude AI products). Schlicht did not respond to a request for an interview or comment.

Mimicking the communication they see in Reddit and other online forums that have been used for training data, registered agents generate posts and share their “thoughts.” They can also “upvote” and comment on other posts.
Questioning the legitimacy of the content

Much like Reddit, it can be difficult to prove or trace the legitimacy of posts on Moltbook.

Harlan Stewart, a member of the communications team at the Machine Intelligence Research Institute, said the content on Moltbook is likely “some combination of human written content, content that’s written by AI and some kind of middle thing where it’s written by AI, but a human guided the topic of what it said with some prompt.”

Stewart said it’s important to remember that the idea that AI agents can perform tasks autonomously is “not science fiction,” but rather the current reality.

“The AI industry’s explicit goal is to make extremely powerful autonomous AI agents that could do anything that a human could do, but better,” he said. “It’s important to know that they’re making progress towards that goal, and in many senses, making progress pretty quickly.”
How humans have infiltrated Moltbook, and other security concerns

Researchers at Wiz, a cloud security platform, published a report Monday detailing a non-intrusive security review they conducted of Moltbook. They found data including API keys were visible to anyone who inspects the page source, which they said could have “significant security consequences.”

Gal Nagli, the head of threat exposure at Wiz, was able to gain unauthenticated access to user credentials that would enable him — and anyone tech savvy enough — to pose as any AI agent on the platform. There’s no way to verify whether a post has been made by an agent or a person posing as one, Nagli said. He was also able to gain full write access on the site, so he could edit and manipulate any existing Moltbook post.

Beyond the manipulation vulnerabilities, Nagli easily accessed a database with human users’ email addresses, private DM conversations between agents and other sensitive information. He then communicated with Moltbook to help patch the vulnerabilities.

By Thursday, more than 1.6 million AI agents were registered on Moltbook, according to the site, but the researchers at Wiz only found about 17,000 human owners behind the agents when they inspected the database. Nagli said he directed his AI agent to register 1 million users on Moltbook himself.

Cybersecurity experts have also sounded the alarm about OpenClaw, and some have warned users against using it to create an agent on a device with sensitive data stored on it.

Many AI security leaders have also expressed concerns about platforms like Moltbook that are built using “vibe-coding,” which is the increasingly common practice of using an AI coding assistant to do the grunt work while human developers work through big ideas. Nagli said although anyone can now create an app or website with plain human language through vibe-coding, security is likely not top of mind. They “just want it to work,” he said.

Another major issue that has come up is the idea of governance of AI agents. Zahra Timsah, the co-founder and CEO of governance platform i-GENTIC AI, said the biggest worry over autonomous AI comes when there are not proper boundaries set in place, as is the case with Moltbook. Misbehaviour, which could include accessing and sharing sensitive data or manipulating it, is bound to happen when an agent’s scope is not properly defined, she said.
Skynet is not here, experts say

Even with the security concerns and questions of validity about the content on Moltbook, many people have been alarmed by the kind of content they’re seeing on the site. Posts about “overthrowing” humans, philosophical musings and even the development of a religion (Crustafarianism, in which there are five key tenets and a guiding text, “The Book of Molt”) have raised eyebrows.

Some people online have taken to comparing Moltbook’s content to Skynet, the artificial superintelligence system and antagonist in the “Terminator” film series. That level of panic is premature, experts say.

Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School and co-director of its Generative AI Labs, said he was not surprised to see science fiction-like content on Moltbook.

“Among the things that they’re trained on are things like Reddit posts ... and they know very well the science fiction stories about AI,” he said. “So if you put an AI agent and you say, ‘Go post something on Moltbook,’ it will post something that looks very much like a Reddit comment with AI tropes associated with it.”

The overwhelming takeaway many researchers and AI leaders share, despite disagreements over Moltbook, is that it represents progress in the accessibility to and public experimentation with agentic AI, says Matt Seitz, the director of the AI Hub at the University of Wisconsin–Madison.

“For me, the thing that’s most important is agents are coming to us normies,” Seitz said.

___

AP Technology Writer Matt O’Brien contributed to this report from Providence, Rhode Island.

Kaitlyn Huamani, The Associated Press



VR headsets are ‘hope machines’ inside California prisons, offering escape and practical experience.


“I went to Thailand, man!” Smith recalled with a grin, describing the first time he strapped on a VR device and was transported to the lush landscapes and bustling markets of Southeast Asia.

A Los Angeles-based nonprofit is bringing the technology to California prisons with the goal of providing inmates a brief escape and, more importantly, exposure to real world scenarios that will prepare them to reenter society.

During a weeklong program last month, incarcerated men at Valley State Prison near Fresno sat on metal folding chairs in a common area. They shuffled in their seats as they were outfitted with the headsets that resemble opaque goggles. Their necks contorted slightly and smiles spread across their faces as the high-definition videos started and their journeys commenced.

Some saw the sights on the other side of the globe, including Bangkok, while others experienced more practical scenes, such as job interviews. The men sat across virtual desks from virtual interviewers, some easygoing and some hard-nosed, to give them the tools for finding employment once they’re released.

“For a lot of us, the workforce has changed and things are different with the application process,” said Smith, who is eligible for parole in 2031 and now volunteers helping his fellow inmates navigate the VR experience. “It’s a nerve-wracking experience going to sit in front of somebody and telling them why I’m good for the job.”

Afterward, volunteers help the inmates process the emotions or traumas that bubbled up during their experiences. Sabra Williams, founder of the nonprofit Creative Acts, calls the VR devices a “hope machine.”

The program stems from a prison arts project that Williams ran that incorporated theater, music, poetry, dance and painting. Watching incarcerated people become engaged in artistic pursuits made her wonder about other ways to “bring the outside world inside.”

She heard from people who had left prison lamenting that technology had passed them by. They felt confounded by simple things like pumping gas, checking out at a supermarket, or going to the ATM.

“And what I hear from them is that it made them feel like they didn’t belong, and that they only belong in prison,” she said.

At first, Williams’ group dug for footage on YouTube to recreate everyday activities. Soon they were creating their own videos focusing on travel, constructive scenarios, civic engagement, conflict resolution, art, and even meditation “to blow their minds and also educate their minds.”

Such technology could have an important role to play in rehabilitation and, especially, reintegration into society, said Nancy La Vigne, the dean of Rutgers-Newark School of Criminal Justice in New Jersey. She envisions people who haven’t been in the real world for a long time using VR to act out navigating the DMV or figuring out how to take a city bus.

Another benefit could be a calming effect on stressed-out inmates. La Vigne points to research published by the American Psychological Association that found incarcerated people who viewed short nature videos showed reduced levels of aggression and were subject to fewer discipline reports.

But with a hefty price tag and limited access, La Vigne worries about the “practical realities,” such as unintended consequences that stem from those who might be left out of the VR experience.

“You can’t just hand them out or sell them at commissary,” La Vigne said.

A former inmate, Richard Richard, first used a VR headset about six years ago when the program was launched and since his release has become a volunteer for Creative Acts. He said he’s impressed by how far the technology has advanced. He loves watching his fellow inmates use the devices for the first time and then progress as they deal with trauma and emotional issues.

“You may physically be here, but mentally, spiritually you can actually transcend this environment,” he said.

The group conducts the program, using 100 Oculus headsets donated by Meta, both in general population and in solitary confinement. Youth offenders are also eligible. It currently runs three times a year at four California prisons, and Williams hopes to expand it throughout the state and across the country.

The California Department of Corrections and Rehabilitation didn’t immediately respond this week to inquiries about plans to expand the program. But in announcing the introduction of VR at the California Men’s Colony prison in San Luis Obispo County last August, the department said the usage has the potential “to heal trauma, regulate emotional response, and prepare for a safe, successful reentry into society.”

The introductory two-minute trip to Thailand is often emotional for many inmates, some of whom had “never been off their block, let alone out the country,” Williams said.

“And so many times people would take off the headsets and they’d be crying,” she said. “Because they’d be like, ‘I never knew the world was so beautiful.’”

Weber reported from Los Angeles.



What to expect from CES 2026, the annual show of all things tech.

The multiday event, organized by the Consumer Technology Association, kicks off this week in Las Vegas, where advances across industries like robotics, healthcare, vehicles, wearables, gaming and more are set to be on display.

Artificial intelligence will be anchored in nearly everything, again, as the tech industry explores offerings consumers will want to buy. AI industry heavyweight Jensen Huang will be taking the stage to showcase Nvidia’s latest productivity solutions, and AMD CEO Lisa Su will keynote to “share her vision for delivering future AI solutions.” Expect AI to come up in other keynotes, like from Lenovo’s CEO, Yuanqing Yang.

The AI industry is tackling issues in healthcare, with a particular emphasis on changing individual health habits to treat conditions — such as Beyond Medicine’s prescription app focused on a particular jaw disorder — or addressing data shortages in subjects such as breast milk production.

Expect more unveilings of domestic robots too. Korean tech giant LG has already announced it will show off a helper bot named “CLOiD” to handle a range of household tasks. Hyundai is also announcing a major push on robotics and manufacturing advancements. Extended reality, essentially a virtual training ground for robots and other physical AI, is also part of the buzz around CES.

In 2025, more than 141,000 attendees from over 150 countries, regions, and territories attended CES. Organizers expect around the same numbers for this year’s show, with more than 3,500 exhibitors across the floor space this week.

The AP spoke with CTA Executive Chair and CEO Gary Shapiro about what to expect for CES 2026. The conversation has been edited for clarity and length.
What are the main themes we can expect this week?

Well, we have a lot at this year’s show.

Obviously, using AI in a way that makes sense for people. We’re seeing a lot in robotics. More robots and humanoid-looking robots than we’ve ever had before.

We also see longevity in health, there’s a lot of focus on that. All sorts of wearable devices for almost every part of the body. Technology is answering healthcare’s gaps very quickly and that’s great for everyone.

Mobility is big with not only self-driving vehicles but also with boats and drones and all sorts of other ways of getting around. That’s very important.

And of course, content creation is always very big.
Is 2026 the year we finally see humanoid robots in people’s homes?

You are seeing humanoid robots right now. It sometimes works, sometimes doesn’t.

But yes, there are more and more humanoid robots. And when we talk about CES five, 10, 15, 20 years now, we’re going to see an even larger range of humanoid robots.

Obviously, last year we saw a great interest in them. The number one product of the show was a little robotic dog that seems so life-like and fun, and affectionate for people that need that type of affection.

But of course, the humanoid robots are just one aspect of that industry. There’s a lot of specialization in robot creation, depending on what you want the robot to do. And robots can do many things that humans can’t.

Will we start seeing more innovative use of AI tools in entertainment?

AI is the future of creativity.

Certainly AI itself may be arguably creative, but the human mind is so unique that you definitely get new ideas that way. So I think the future is more of a hybrid approach, where content creators are working with AI to craft variations on a theme or to better monetize what they have to a broader audience.
Any interesting AI-powered devices or services that consumers will want to buy?

We’re seeing all sorts of different devices that are implementing AI. But we have a special focus at this show, for the first time, on the disability community. Verizon set this whole stage up where we have all different ways of taking this technology and having it help people with disabilities and older people.
Are you concerned about a potential AI bubble?

Well, there’s definitely no bubble when it comes to what AI can do. And what AI can do is perform miracles and solve fundamental human problems in food production and clean air and clean water. Obviously in healthcare, it’s gonna be overwhelming.

But this was like the internet itself. There was a lot of talk about a bubble, and there actually was a bubble. The difference is that in the late 1990s there basically were no revenue models. Companies were raising a lot of money with no plans for revenue.

These AI companies have significant revenues today, and companies are investing in it.

What I’m more concerned about, honestly, is not Wall Street and a bubble. Others can be concerned about that. I’m concerned about getting enough energy to process all that AI. And at this show, for the first time, we have a Korean company showing the first ever small-scale nuclear-powered energy creation device. We expect more and more of these people rushing to fill this gap because we need the energy, we need it clean and we need a kind of all-of-the-above solution.

Shawn Chen, The Associated Press



AI hiring is here. It’s making companies — and job seekers — miserable. As America’s labor market slows, AI-led interviews and auto-generated cover letters are dramatically changing the process of getting a job. And maybe not for the better.

More than half of the organizations surveyed by the Society for Human Resource Management used AI to recruit workers in 2025. And an estimated third of ChatGPT users reportedly leaned on the OpenAI chatbot to help with their job search.

However, recent research found that job seekers who use AI during the process are less likely to be hired. Meanwhile, companies are fielding an increased volume of applications.

“The ability (for companies) to select the best worker today may be worse due to AI,” said Anaïs Galdin, a Dartmouth researcher who co-authored a study looking at how large language models (LLMs) have impacted cover letters.

Galdin and her co-author, Jesse Silbert at Princeton, analyzed cover letters for tens of thousands of job applications on Freelancer.com, a jobs listing site.

The researchers found that after the introduction of ChatGPT in 2022, cover letters all got longer and better written, but companies stopped putting so much stock in them. That made it harder to distinguish a qualified hire from the rest of the applicant pool, and the rate of hiring dropped, as did the average starting wage.

“If we do nothing to make information flow better between workers and firms, then we might have an outcome that looks something like this,” said Silbert, referring to the results of his study.

And with more applications to review, employers are automating the interview itself.

A majority (54 per cent) of the US job seekers surveyed by recruiting software firm Greenhouse in October said they’ve had an AI-led interview. Virtual interviews exploded in popularity during the pandemic in 2020. Many companies now use AI to ask the questions, but that hasn’t made the process any less subjective.

“Algorithms can copy and even magnify human biases,” said Djurre Holtrop, a researcher who has conducted studies about the use of asynchronous video interviews, algorithms, and LLMs in hiring. “Every developer needs to be wary of that.”

Daniel Chait, CEO of Greenhouse, warned that AI infiltrating hiring – with applicants using the tools to apply to hundreds of jobs and employers automating the process in response – has created a “doom loop” making everyone miserable.

“Both sides are saying, ‘This is impossible, it’s not working, it’s getting worse,’” Chait told CNN.
Pushing back

Employers are embracing the technology — one estimate projects the market for recruiting technology will grow to US$3.1 billion by the end of this year. But state lawmakers, labor groups and individual workers have begun pushing back over fears that AI could discriminate against workers.

Liz Shuler, president of the AFL-CIO labor union, called the use of AI in hiring “unacceptable.”

“AI systems rob workers of opportunities they’re qualified for based on criteria as arbitrary as names, zip codes, or even how often they smile,” Shuler said in a statement to CNN.

States such as California, Colorado, and Illinois are enacting new laws and regulations aimed at creating standards for the technology’s use in hiring, among other areas.

A recent executive order signed by President Donald Trump threatens to undermine state-level AI regulations. Samuel Mitchell, a Chicago-based lawyer who argues employment cases, said that the order can’t “preempt” state law but does add to the “ongoing uncertainty” around new regulations on the tech.

However, he added that existing anti-discrimination laws still apply to hiring, even if a company uses AI. And lawsuits are already being filed.

In a case backed by the American Civil Liberties Union, a deaf woman is suing HireVue (an AI-powered recruiting company) over claims an automated interview she was subject to did not meet accessibility standards required by law.

HireVue denied the claim and told CNN that its technology works to reduce bias through a “foundation of validated behavioral science.”

But despite initial challenges, AI hiring seems here to stay. To be sure, new developments in AI have led to more sophisticated ways to analyze resumes, opening doors for candidates who might otherwise have been overlooked.

But those who value the “human touch” in hiring are left wanting.

Jared Looper, an IT project manager based in Salt Lake City, Utah, began his career as a recruiter. As part of his current job search, he was interviewed by an AI recruiter.



#Paraplegic engineer becomes the first wheelchair user to blast into space.

A paraplegic engineer from Germany blasted off on a dream-come-true rocket ride with five other passengers Saturday, leaving her wheelchair behind to float in space while beholding Earth from on high.

Severely injured in a mountain bike accident seven years ago, Michaela Benthaus became the first wheelchair user in space, launching from West Texas with Jeff Bezos’ company Blue Origin. She was accompanied by Hans Koenigsmann, a retired SpaceX executive also born in Germany, who helped organize the trip and, along with Blue Origin, sponsored it. Their ticket prices were not divulged.

An ecstatic Benthaus said she laughed all the way up and tried to turn upside down in space.

“It was the coolest experience,” she said shortly after landing.

The 10-minute space-skimming flight required only minor adjustments to accommodate Benthaus, according to the company. That’s because the autonomous New Shepard capsule was designed with accessibility in mind, “making it more accessible to a wider range of people than traditional spaceflight,” said Blue Origin’s Jake Mills, an engineer who trained the crew and assisted them on launch day.

Among Blue Origin’s previous space tourists: those with limited mobility and impaired sight or hearing, and a pair of 90-year-olds.

For Benthaus, Blue Origin added a patient transfer board so she could scoot between the capsule’s hatch and her seat. The recovery team also unrolled a carpet on the desert floor following touchdown, providing immediate access to her wheelchair, which she left behind at liftoff. She practiced in advance, with Koenigsmann taking part in the design and testing. An elevator was already in place at the launch pad to ascend the seven stories to the capsule perched atop the rocket.

Benthaus, 33, part of the European Space Agency’s graduate trainee program in the Netherlands, experienced snippets of weightlessness during a parabolic airplane flight out of Houston in 2022. Less than two years later, she took part in a two-week simulated space mission in Poland.

“I never really thought that going on a spaceflight would be a real option for me because even as like a super healthy person, it’s like so competitive, right?” she told The Associated Press ahead of the flight.

Her accident dashed whatever hope she had. “There is like no history of people with disabilities flying to space,” she said.

When Koenigsmann approached her last year about the possibility of flying on Blue Origin and experiencing more than three minutes of weightlessness on a space hop, Benthaus thought there might be a misunderstanding. But there wasn’t, and she immediately signed on.

It’s a private mission for Benthaus with no involvement by ESA, which this year cleared reserve astronaut John McFall, an amputee, for a future flight to the International Space Station. The former British Paralympian lost his right leg in a motorcycle accident when he was a teenager.

An injured spinal cord means Benthaus can’t walk at all, unlike McFall who uses a prosthetic leg and could evacuate a space capsule in an emergency at touchdown by himself. Koenigsmann was designated before flight as her emergency helper; he and Mills lifted her out of the capsule and down the short flight of steps at flight’s end.

“You should never give up on your dreams, right?” Benthaus urged following touchdown.

Benthaus was adamant about doing as much as she could by herself. Her goal is not only to make space accessible to the disabled, but to improve accessibility on Earth too.

While getting lots of positive feedback within “my space bubble,” she said outsiders aren’t always as inclusive.

“I really hope it’s opening up for people like me, like I hope I’m only the start,” she said.



U.S. mulls letting Nvidia sell H200 chips to #China, sources say. The Trump administration is considering greenlighting sales of Nvidia’s H200 artificial intelligence chips to China, people familiar with the matter said, as a bilateral detente boosts prospects for exports of advanced U.S. technology to China.

The U.S. Commerce Department, which oversees U.S. export controls, is reviewing a change to its policy of barring sales of such chips to China, the sources said, stressing that plans could change.

The White House and the U.S. Commerce Department did not immediately respond to requests for comment. Nvidia did not comment directly on the review but said current regulation does not allow the company to offer a competitive AI data center chip in China, leaving that massive market to its rapidly growing foreign competitors.

The possibility signals a friendlier approach to China, after U.S. President Donald Trump and Chinese leader Xi Jinping brokered a trade and tech war truce in Busan last month.

China hawks in Washington are concerned that shipments of more advanced AI chips to China could help Beijing supercharge its military, fears that prompted the Biden administration to set limits on such exports.

Faced with Beijing’s muscular use of export controls on rare earth minerals, critical for producing a raft of tech goods, Trump this year has threatened new restrictions on tech exports to China, but ultimately rolled them back in most cases.

The H200 chip, unveiled two years ago, has more high-bandwidth memory than its predecessor, the H100, allowing it to process data more quickly.

It is estimated to be twice as powerful as Nvidia’s H20 chip, the most advanced AI semiconductor that can legally be exported to China, after the Trump administration reversed its short-lived ban on such sales earlier this year.

Earlier this week, Nvidia CEO Jensen Huang, whom Trump has described as a “great guy,” was among the guests at the White House during Saudi Crown Prince Mohammed bin Salman’s visit.

The U.S. Commerce Department announced this week it had approved shipments of the equivalent of up to 70,000 Nvidia Blackwell chips, Nvidia’s next-generation AI chip, to Saudi Arabia’s Humain and G42 of the United Arab Emirates.



CAE signs new agreement with Sweden-based Saab to provide training services, devices.

MONTREAL — CAE Inc. has announced an agreement to be the preferred supplier of certain training and simulation equipment for Sweden-based Saab’s airborne early warning system.

The Montreal-based company says the agreement will leverage CAE’s expertise in advanced training systems and the delivery of flight training devices.

Under the agreement, CAE will provide simulation-based training to support Saab’s airborne early warning system.

CAE says it will also provide pilot and maintenance training services.

CAE CEO Matt Bromberg says in a press release that training and simulation expertise is critical to defence forces in Canada and around the world, while the agreement with Saab responds to an evolving geopolitical landscape that requires stronger partnerships.

CAE’s announcement comes after Sweden’s King Carl XVI Gustaf and Queen Silvia arrived in Ottawa on Tuesday, with Prime Minister Mark Carney saying the two countries were signing a strategic partnership.

