First-of-its-kind medical robot is helping doctors perform spinal surgeries in Delaware.

PHILADELPHIA, Pennsylvania (KYW) - A first-of-its-kind medical robot is helping doctors at Nemours Children’s Hospital perform spinal surgeries.

Doctors at Nemours said this robot is making surgery faster and more precise, and it means a quicker and easier recovery for patients.

Rhiannon Groff, 16, is recovering from a new kind of spine surgery performed by Dr. Brett Shannon at Nemours.

The 11th grader has scoliosis, a curvature of the spine.

“I would have a lot of soreness and aching, especially in my lower back,” she said.

To relieve the pain and straighten her back, surgeons place metal rods along the spine that are held in place with a series of screws.

“It’s important to make sure that they’re placed exactly in the bone rather than outside into the lung or to the blood vessels or to the nerve roots,” said Shannon.

Doctors said the robot makes spine surgery faster and more accurate. Nemours is the first facility on the East Coast to have this new spinal robot.

“This elevates us to another generation of being able to see what is unseen beneath the surface and understand the three-dimensional geometry much better,” said Dr. Suken Shah of Nemours.

The robot is equipped with imaging to pinpoint the location of the screws. It can also assist in getting them precisely inserted, not touching nearby vital structures, just millimeters away.

Groff said there’s no more pain two months after the surgery.

“When I first heard that it was gonna be assisted by a robot, I honestly thought it was really cool,” she said. “And I’m so glad that it’s helping people like me and people with more serious conditions recover and get better.”

Groff is already stretching, ready to head back to running track pain-free.

The team at Nemours said they’re using the new robot to assist in a variety of spinal surgeries.

By Stephanie Stahl, Nate Sylves.



San Francisco billboard challenge puts AI engineers to the test.

SAN FRANCISCO (KPIX) -- On a quiet San Francisco street, a plain white billboard seemed to appear out of nowhere. No logo, no tagline, just five strings of numbers. Was it an ad? An art project? Or something else entirely?

“It was a moment of desperation,” he said.

Alfred Wahlforss, cofounder and CEO of a small startup called Listen Labs, had a big problem: how to compete for artificial intelligence engineers against Silicon Valley giants.

“We’re hiring over 100 people over the next few months and there are empty spots, but we can’t fill them because Mark Zuckerberg is giving US$100 million offers to the best engineers,” he said.

So they did something off the wall, spending a fifth of their marketing budget, about $5,000, on a billboard.

To most, it looked like gibberish. To the right eyes, a coding challenge. Solve it and you land on a website and face the real test: build an algorithm to act as a digital bouncer at Berghain, the Berlin nightclub famed for its nearly impossible door policy.

Quirky, sure. But for Listen Labs, the bouncer challenge was a metaphor for their own work: using AI to decide who gets interviewed for market research and who doesn’t.

They expected a few engineers might notice. Then someone posted it online and the puzzle went viral.

Asked whether he was surprised by the reaction, Wahlforss said, “It was wild.”

Within days, thousands had taken a shot; 430 cracked it, among them Alex Nicita, a software consultant from New York.

“It was very fun to go through, solve the challenge and reach the top of the leader board,” he said.

Now he’s in the interview round, and yes, some of these code breakers have already been hired.

In the end, 60 people made the cut, including the winner who scored a night at Berghain, all expenses paid.

“It’s a reminder to take risks and do something unique and different, and extraordinary things happen,” Wahlforss said.



US$2,000 for a phone? Apple says yes.

Back in 2017, consumers balked at the idea of a US$1,000 iPhone. Now, some shoppers may end up paying double that if they choose Apple’s latest top-of-the-line model.

The iPhone 17 Pro Max, the larger variant of Apple’s Pro phone that launches today along with the iPhone 17, 17 Pro and iPhone Air, costs $2,000 if buyers choose the version with two terabytes (2TB) of storage. Phones with extra storage typically cost more, but this is the first time Apple has released a 2TB option for the iPhone, making it one of the most expensive phones on the market.

The launch comes as Apple faces mounting pressure to boost iPhone sales amid concerns about its AI strategy. Offering a more expensive price tier for the iPhone allows Apple to generate more revenue without selling more units during what has been, until recently, a rocky smartphone market. Some analysts say consumers, tightening their purse strings because of inflation and tariffs, have been cutting back spending on smartphones.

Apple’s new iPhones should modestly raise the product’s average selling price, or the average price iPhones are sold at, said Angelo Zino, senior vice president at investment research firm CFRA. That’s a metric analysts monitor as a sign of how lucrative the iPhone is for Apple.

But Zino largely expects the price boost to be driven by demand for the iPhone Air, which is $100 more expensive than last year’s iPhone 16 Plus. The 2TB storage option is a means for Apple to differentiate its high-end phones from the competition, he says. Samsung’s Galaxy S25 Ultra and Google’s Pixel 10 Pro XL top out at 1TB of storage.

“I think it’s an interesting kind of offering, in the sense that I don’t believe there’s another phone out there that offers a two terabyte internal storage,” he said.

The $2,000 iPhone may be Apple’s most expensive phone yet, but it’s not quite as pricey as foldable phones from Samsung and Google with 1TB of storage. The 1TB version of Samsung’s Galaxy Z Fold 7 costs $2,419.99, while the Pixel 10 Pro Fold with the same amount of storage is priced at $2,149.

Many buyers aren’t likely to immediately face that price tag since carriers typically offer trade-in deals and payment plans to soften the blow. Fifty-five percent of phone shoppers in the United States — including those who purchase flip phones and basic mobile phones — buy their device through an installment plan, according to Consumer Intelligence Research Partners.

And it’s shoppers looking for less expensive phones, not premium devices like the iPhone 17 Pro Max, that are more likely to hold off on upgrading.

“Economic uncertainty tends to compress demand at the lower end of the market, where price sensitivity is highest,” the International Data Corporation’s Nabila Popal wrote in a report earlier this month.

Apple’s Pro iPhones tend to sell better than the standard entry-level models, particularly in the United States, according to CFRA’s Zino and Josh Lowitz, an analyst for Consumer Intelligence Research Partners.

The increased storage may also be another sign that Apple is marketing its “pro” iPhones towards content creators and video editors. Large multimedia files typically take up more space, and the Pro models also include support for tools used to sync video across multiple cameras. Apple said it filmed its September launch event on an iPhone 17 Pro.

Lowitz noted that 2TB is considered a lot of storage even for a laptop. Those who purchase 2TB laptops typically work in fields that require saving a lot of large files locally, like graphic designers who need to preserve hundreds of different design versions for projects.

“Other than people with extraordinary video usage, two terabytes is just … it’s a crazy amount of storage,” he said.

Apple announced its new iPhone lineup on September 9 on its campus in Cupertino, California, ahead of the Friday launch. The Pro models include a redesigned back panel and a camera with a longer zoom, along with extended battery life. The iPhones also have better performance due to Apple’s new chip and an updated design that allows for better heat dissipation, Apple claims.

Wedbush Securities analyst Dan Ives predicts iPhone preorders will increase five to 10 per cent compared to last year since the firm estimates 20 per cent of global iPhone owners haven’t upgraded in the past four years.

Article written by Lisa Eadicicco, CNN



‘It’s a game changer’: New implant helps stabilize blood pressure in patients with spinal cord injuries.


Research led by teams at several universities worldwide, including the University of Calgary, shows that a new system implanted on patients’ spines can restore blood pressure balance after a spinal cord injury.

The findings, published in both Nature and Nature Medicine, describe a targeted therapy to address blood pressure regulation in 14 patients across four clinical studies conducted at three medical centres in Canada, Switzerland, and the Netherlands.

“Blood pressure is a profound issue after a spinal cord injury, both highs and lows,” said Aaron Phillips, an associate dean at the University of Calgary Cumming School of Medicine, who is involved in the research.

“This is because the spinal cord is disconnected from the brain, which is responsible for controlling blood pressure.”

Phillips explains that low blood pressure can lead to fainting and reduced energy, while high blood pressure can increase the risk of a stroke.

“We also know that this blood pressure instability that happens after a spinal cord injury can lead to cardiovascular disease over the long term,” he said.

One of the 14 participants in the clinical trial is Cody Krebs from Alberta.

The 32-year-old says a semi-truck blew through a stop sign and t-boned his vehicle in 2022.

“That resulted in me breaking my neck — C6 and C7 level,” said Krebs.

Krebs is now in a wheelchair and the damage to his spinal cord means he can’t regulate his blood pressure.

“I get really lightheaded and dizzy, and my ears can start ringing,” he said.

As part of the trial, Krebs had one of the implantable systems surgically placed in his spine.

“This is a device that is implanted around the spinal cord. It delivers a low electrical current to basically replace the signal the brain would normally give to control blood pressure,” said Dr. Fady Girgis, a neurosurgeon and associate professor at the University of Calgary.

“Surgery, of course, carries some risk, but these devices have been around for a long time and are generally very safe. They can prevent patients from needing to take blood pressure medications.”

The electrical currents in Krebs’ device are controlled externally with a remote. A new prototype has been developed that allows the currents to be delivered without a remote.

Krebs says his quality of life has improved immensely with the implant.

“It’s helped me get back into work because I’m not exhausted all the time,” said Krebs.

“If I have it on throughout the day, I find I can spend more time in the evenings without being tired and having to go to bed early.”

The company that developed the implantable neurostimulation system used in the studies has received FDA approval to initiate a pivotal trial of the therapy. The trial is expected to involve about 20 neurorehabilitation and neurosurgical research centres across Canada, Europe and the United States.



Mark Zuckerberg unveils Meta’s newest AI-powered smart glasses.

Meta CEO Mark Zuckerberg took the stage on Wednesday to unveil the company’s next-generation artificial intelligence-powered wearable device: a pair of smart glasses with a tiny display inside the lens.

The Meta Ray-Ban Display glasses represent the company’s next step toward a future where we all spend less time looking down at a phone screen. Rather, we could interact with Meta’s AI technology — as well as our messages, photos and the rest of our online lives — via glasses not totally unlike regular prescription lenses or sunglasses.

The Displays and other new wearables are part of the company’s bid to make its AI technology a bigger part of users’ everyday lives as it competes with other big industry players to create the most advanced and widely used models.

“Glasses are the ideal form factor for personal super intelligence because they let you stay present in the moment while getting access to all of these AI capabilities to make you smarter, help you communicate better, improve your memory, improve your senses,” Zuckerberg said.

Zuckerberg announced the new products during the keynote at Meta’s annual Connect event — where it outlines new AI, virtual and augmented reality and wearable technologies — from its Menlo Park, California, headquarters on Wednesday. Meta also showed off the latest version of its more basic Ray-Ban smart glasses, the Gen 2; new sport glasses, the Meta Oakley Vanguard; and new experiences on its Quest 3 virtual reality headsets, including games, a new entertainment app and a partnership with Disney+ for Horizon, Meta’s immersive “metaverse” experience.

Smart glasses remain a relatively niche product, but consumer adoption is growing fast. Meta’s partner, Ray-Ban parent EssilorLuxottica, said in July that revenue from its Meta glasses more than tripled year-over-year. And the company is seeking to produce 10 million pairs of Meta glasses each year starting in 2026.

The Meta Ray-Ban Display glasses are key to reaching that goal, EssilorLuxottica’s Chief Wearables Officer Rocco Basilico said in an interview with CNN.

“You can wear the glasses and feel good in your favorite brands, but if you actually need, like, some super-powers, some immediate information, that could be delivered through audio or through the display,” Basilico said, calling the new display offering “the biggest launch that we have done so far.”

Zuckerberg said on Wednesday that the sales trajectory for Meta’s smart glasses is “similar to some of the most popular consumer electronics of all time.”

While Meta was early to making smart glasses that consumers actually want to buy, it faces growing competition from Google, Samsung, Snap and potentially Amazon, raising the stakes for the new technology it’s rolling out starting Wednesday.

Here’s more on the new display smart glasses and everything the company announced at Meta Connect:
Meta Ray-Ban Display

Meta has long described its Ray-Bans as smart glasses that can “see what you see and hear what you hear.” Now, with the Meta Ray-Ban Display, users will also have some visual feedback that makes it possible to interact with the device in new ways.

The Displays feature a tiny display screen on the inside right corner of the right lens. The display looks as though it’s projected several feet in front of the user, overlaid on the surrounding environment.

That display makes it possible to do a variety of things one might previously have done on their phone screen: view and send messages, capture and review photos and videos, watch Instagram Reels and take video calls where users will see the person on the other end.

“We have been working on glasses for more than 10 years, and this is one of those special moments where we get to show you something that we poured a lot of our lives into,” Zuckerberg said, “and that I just think is different from anything that I’ve seen.”

There’s also a navigation feature that shows where a user is on a map in real-time, so they could walk somewhere without staring at a maps app on their phone. And live captioning and translation lets users see what a conversation partner is saying in real-time. (Captioned conversations will also save as a transcript in the Meta AI app — a feature journalists at least will find very useful!)

Users can also ask the Meta AI assistant questions and it will respond with information panels on the display, in addition to giving an audio answer.

Whereas users could interact with previous versions of Meta Ray-Bans using only voice commands, the Meta Ray-Ban Displays work with a “neural” wristband that lets users navigate the display using subtle hand gestures. Tapping your thumb and index finger, for example, acts as a select function to press play on music.

Only the wearer can see the display screen. That’s by design to protect the privacy of the user’s messages, photos and other content. But it could also lead to some awkward moments if people don’t realize Meta Ray-Ban Display users are actually reading incoming texts in the middle of a conversation.

However, users can turn the display screen off when they’re not using it — and use the spectacles just like regular glasses.

“When we’re designing the hardware and software, we focus on giving you access to very powerful tools when you want them, and then just having them fade into the background” when you’re not, Zuckerberg said.

Meta isn’t the first to try to make a device like this. Google launched an early version of glasses with a display in the lens, Google Glass, in 2013, but it flopped with consumers. Since then, the technology to power smart glasses — like processors, batteries and cameras — has improved (and shrunk) significantly. The Displays are just slightly heavier and chunkier than non-techy glasses.

The Displays work for six hours on a single charge, with the case providing up to 30 hours of additional power. The wristband has 18 hours of battery life and is water-resistant.

The Meta Ray-Ban Displays will be available starting on September 30 for US$799, at limited brick-and-mortar retail stores in the United States, including some Verizon, LensCrafters, Ray-Ban and Best Buy locations.

Ray-Ban Meta Gen 2

The Ray-Ban Meta Gen 2, priced at US$379, looks similar to predecessors but has updated colours, battery life and camera. The battery life has doubled to eight hours, and the charging case provides an additional 48 hours of power.

The glasses can now also capture higher-quality 3K video. In updates set for later this fall, they’ll be able to take slow-motion and hyperlapse videos, too.

Meta says a new, opt-in “conversation focus” feature will make it easier for Meta Ray-Ban wearers to hear someone during an in-person conversation, even in a loud area. The tool uses the glasses’ “open-ear speakers to amplify” the other person amid background noise.

An imperfect live demo at Connect served as a reminder that despite the advancements, this technology is still in its fairly early stages. The company attempted to show a chef using Meta AI on the Gen 2s to get audio directions to follow a recipe. When the assistant failed to generate an intelligible response, Zuckerberg blamed it on the Wi-Fi.

Similarly, in a demo of the Meta Ray-Ban Displays, Zuckerberg struggled to answer a video call from Chief Technology Officer Andrew Bosworth because the button to accept the call didn’t show up on the display. “We’ll debug that later,” Bosworth said.

Meta Oakley Vanguard Sports Glasses

The new Meta Oakley Vanguard are smart glasses designed for sports and outdoor activities and cost US$499.

They pair with platforms Strava and Garmin to let users track their workouts. The Meta AI app will have a new “workouts” section to show activity details, photos and videos captured with the smart glasses, and an AI summary of each workout.

The Vanguard boasts bigger, louder speakers — so users can still hear their music on a windy bike ride, for example — and has the longest battery life of any Meta glasses, around nine hours. The Vanguard’s control buttons are on the bottom edge of the arm of the glasses, whereas other Meta Ray-Bans feature a top capture button, so users can still access them if they’re wearing a helmet.

The Vanguard glasses are also water- and dust-resistant. And the camera, which is centered on the bridge of the nose rather than on the side, has a wider field of view compared to Meta’s other glasses and can capture 3K video.

“I’ve taken them out surfing,” Zuckerberg said. “It’s fine, it’s good.”

CNN received a demo of the Vanguards paired with a Garmin watch. On a treadmill walk, Meta AI on the Vanguards could answer questions, for example, about current heart rate and the length of the exercise.



Canadian researchers create tool to remove anti-deepfake watermarks from AI content.

OTTAWA — University of Waterloo researchers have built a tool that can quickly remove watermarks identifying content as artificially generated — and they say it proves that global efforts to combat deepfakes are most likely on the wrong track.

Academia and industry have focused on watermarking as the best way to fight deepfakes and “basically abandoned all other approaches,” said Andre Kassis, a PhD candidate in computer science who led the research.

At a White House event in 2023, the leading AI companies — including OpenAI, Meta, Google and Amazon — pledged to implement mechanisms such as watermarking to clearly identify AI-generated content.

AI companies’ systems embed a watermark, which is a hidden signature or pattern that isn’t visible to a person but can be identified by another system, Kassis explained.
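The embed-and-detect principle Kassis describes can be illustrated with a deliberately simplified sketch. Production watermarks operate on statistical patterns in frequency or latent space, not raw pixel bits; the least-significant-bit scheme and function names below are illustrative assumptions only, not the systems the article discusses.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Toy scheme: hide each bit in the least-significant bit of a pixel."""
    out = pixels.copy()
    flat = out.ravel()  # view into the copy, so writes land in `out`
    for i, b in enumerate(bits):
        flat[i] = (int(flat[i]) & 0xFE) | b  # clear LSB, then set it to b
    return out

def detect_watermark(pixels: np.ndarray, n: int) -> list[int]:
    """Read the hidden bits back out of the first n pixels."""
    return [int(p) & 1 for p in pixels.ravel()[:n]]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in "image"
sig = [1, 0, 1, 1, 0, 0, 1, 0]                           # hidden signature

marked = embed_watermark(img, sig)
# The signature survives, yet no pixel moved by more than 1 intensity level,
# so the marked image is visually indistinguishable from the original.
```

A detector that knows the scheme recovers the signature exactly, which is the property real watermarking systems aim for at much greater robustness.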

He said the research shows the use of watermarks is most likely not a viable shield against the hazards posed by AI content.

“It tells us that the danger of deepfakes is something that we don’t even have the tools to start tackling at this point,” he said.

The tool developed at the University of Waterloo, called UnMarker, follows other academic research on removing watermarks. That includes work at the University of Maryland, a collaboration between researchers at the University of California and Carnegie Mellon, and work at ETH Zürich.

Kassis said his research goes further than earlier efforts and is the “first to expose a systemic vulnerability that undermines the very premise of watermarking as a defence against deepfakes.”

In a follow-up email statement, he said that “what sets UnMarker apart is that it requires no knowledge of the watermarking algorithm, no access to internal parameters, and no interaction with the detector at all.”

When tested, the tool worked more than 50 per cent of the time on different AI models, a university press release said.

AI systems can be misused to create deepfakes, spread misinformation and perpetrate scams — creating a need for a reliable way to identify content as AI-generated, Kassis said.

After AI tools became too advanced for AI detectors to work well, attention turned to watermarking.

The idea is that if we cannot “post facto understand or detect what’s real and what’s not,” it’s possible to inject “some kind of hidden signature or some kind of hidden pattern” earlier on, when the content is created, Kassis said.

The European Union’s AI Act requires providers of systems that put out large quantities of synthetic content to implement techniques and methods to make AI-generated or manipulated content identifiable, such as watermarks.

In Canada, a voluntary code of conduct launched by the federal government in 2023 requires those behind AI systems to develop and implement “a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking).”

Kassis said UnMarker can remove watermarks without knowing anything about the system that generated it, or anything about the watermark itself.

“We can just apply this tool and within two minutes max, it will output an image that is visually identical to the watermarked image,” which can then be distributed, he said.

“It kind of is ironic that there’s billions that are being poured into this technology and then, just with two buttons that you press, you can just get an image that is watermark-free.”
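The core claim — that a watermark can be destroyed by a perturbation too small to see — has a toy analogue. If a signature lived in the least-significant pixel bits, randomizing those bits would wipe it while changing each pixel by at most one intensity level out of 255. UnMarker’s actual attack is far more sophisticated; this is only an assumption-labeled illustration of "visually identical, watermark-free."

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "image"

# Scrub: keep the top 7 bits of every pixel, randomize the least-significant
# bit. Any signature hidden in the LSBs is now gone.
scrubbed = (img & 0xFE) | rng.integers(0, 2, size=img.shape, dtype=np.uint8)

# Each pixel changed by at most 1/255 of full scale -- imperceptible to the
# eye, which is what "visually identical" means in this context.
```

The asymmetry this illustrates is the researchers’ point: embedding a robust signature is hard, while a cheap, scheme-agnostic perturbation can erase it.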

Kassis said that while the major AI players are racing to implement watermarking technology, more effort should be put into finding alternative solutions.

Watermarks have “been declared as the de facto standard for future defence against these systems,” he said.

“I guess it’s a call for everyone to take a step back and then try to think about this problem again.”

This report by The Canadian Press was first published July 23, 2025.

