IEEE Spectrum
The world's leading engineering magazine

Could Terahertz Radar in Cars Save Lives?
A few years ago, Matthew Carey lost a friend in a freak car accident, after the friend’s car struck some small debris on a highway and spun out of control. Ordinarily, the car’s sensors would have detected the debris in plenty of time, but it was operating under conditions that render all of today’s car-mounted sensors useless: fog and bright early-morning sunshine. Radar can’t see small objects well, lidar is limited by fog, and cameras are blinded by glare. Carey and his cofounders decided to create a sensor that could have done the job—a terahertz imager.

Historically, terahertz frequencies have been the least utilized portion of the electromagnetic spectrum. People have struggled to send them even short distances through the air. But thanks to some intense engineering and improvements in silicon transistor frequency, beaming terahertz radiation over hundreds of meters is now possible. Teradar, the Boston-based startup Carey cofounded, has managed to make a sensor that can meet the auto industry’s 300-meter distance requirements. The company came out of stealth last week with chips it says can deliver 20 times the resolution of automotive radar while seeing through all kinds of weather and costing less than lidar. The tech provides “a superset of lidar and radar combined,” Carey says. The technology is in tests with carmakers for a slot in vehicles to be produced in 2028, he says. It would be the first such sensor to make it to market. “Every time you unlock a chunk of the electromagnetic spectrum, you unlock a brand-new way to view the world,” Carey says.

## Terahertz imaging for cars

Teradar’s system is a new architecture, says Carey, that has elements of traditional radar and a camera. The terahertz transmitters are arrays of elements that generate electronically steerable beams, while the sensors are like imaging chips in a camera. The beams scan the area, and the sensor measures the time it takes for the signals to return as well as where they return from.

Teradar’s system can steer beams of terahertz radiation with no moving parts. Credit: Teradar

From these signals, the system generates a point cloud, similar to what a lidar produces. But unlike lidar, it does not use any moving parts. Those moving parts add significantly to the cost of lidar and subject it to wear and tear from the road. “It’s a sensor that [has] the simplicity of radar and the resolution of lidar,” says Carey. Whether it replaces either technology or becomes an add-on is up to carmakers, he adds. The company is currently working with five of them.

## Terahertz transistors and circuits

That Teradar has gotten this far is partly down to progress in silicon transistor technology—in particular, the steady increase in the maximum frequency of devices that modern foundries can supply, says Carey. Ruonan Han, a professor of electrical engineering at MIT who specializes in terahertz electronics, agrees. These improvements have led to boosts in the efficiency of terahertz circuits, their output power, and the sensitivity of receivers. Additionally, chip packaging, which is key to efficiently transmitting the radiation, has improved. Combined with research into the design of circuits and systems, engineers can now apply terahertz radiation in a variety of applications, including autonomous driving and safety. Nevertheless, “it’s pretty challenging to deliver the performance needed for real and safe self-driving—especially the distance,” says Han.
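The ranging described above is ordinary time-of-flight geometry, and wide sweep bandwidths are part of what sharpens resolution. The sketch below is only back-of-the-envelope arithmetic; the 2-microsecond return and the bandwidth values are illustrative assumptions, not Teradar specifications.

```python
# Rough back-of-the-envelope radar arithmetic. The round-trip time and
# bandwidth values below are illustrative assumptions, not Teradar specs.
C = 3.0e8  # speed of light, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Target distance from the time a signal takes to go out and come back."""
    return C * round_trip_seconds / 2

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest separable distance between two targets: c / (2 * bandwidth)."""
    return C / (2 * bandwidth_hz)

# A return after 2 microseconds corresponds to a target about 300 meters away,
# the automotive distance requirement quoted in the article.
print(range_from_time_of_flight(2e-6))   # ~300.0 m

# Wider frequency sweeps yield finer range resolution.
print(range_resolution(1e9))    # 1 GHz sweep  -> ~0.15 m
print(range_resolution(10e9))   # 10 GHz sweep -> ~0.015 m
```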
His lab at MIT has worked on terahertz radar and other circuits for several years. At the moment it’s focused on developing lightweight, low-power terahertz sensors for robots and drones. His lab has also spun out an imaging startup, Cambridge Terahertz, targeted at using the frequency band’s advantages in security scanners, where it can see through clothes to spot hidden weapons. Teradar, too, will explore applications outside the automotive sector. Carey points out that while terahertz frequencies do not penetrate skin, melanomas show up as a different color at those wavelengths compared to normal skin. But for now Carey’s company is focused on cars. And in that area, there’s one question I had to ask: Could Teradar’s tech have saved Kit Kat, the feline regrettably run down by a Waymo self-driving car in San Francisco last month? “It probably would have saved the cat,” says Carey.
spectrum.ieee.org
November 20, 2025 at 7:46 PM
Narrowing focus can increase productivity
_This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Taro and delivered to your inbox for free!_

The most productive engineer I worked with at Meta joined the company as a staff engineer. This is already a relatively senior position, but he then proceeded to earn two promotions within three years, becoming one of the most senior engineers in the entire company. Interestingly, what made him so productive was also frequently a source of annoyance for many of his colleagues.

Productivity comes from prioritization, and that meant he often said no to ideas and opportunities that he didn’t think were important. He frequently rejected projects that didn’t align with his priorities. He was laser-focused every day on the top project that the organization needed to deliver. He would skip status meetings, tech debt initiatives, and team bonding events. When he was in focus mode, he was difficult to get in touch with.

When I compared my own work to his relentless focus, I realized that most of what I spent my time on didn’t actually matter. I thought that having a to-do list of 10 items meant I was being productive. He ended up accomplishing a lot more than me with a list of two items, even if that meant he may have occasionally been a painful collaborator.

This is what the vast majority of engineers misunderstand about productivity. **The biggest productivity “hack” is to simply work on the right things.** Figure out what’s important and strip away everything else from your day so that you can make methodical progress on that. In many workplaces, this is surprisingly difficult, and you’ll find your calendar filled with team lunches, maintenance requests, and leadership reviews. Do an audit of your day and examine how you spend your time. As an engineer, if the majority of your day is spent in emails and coordinating across teams, you’re clearly not being as productive as you could be.

My colleague got promoted so quickly because of his prodigious output. That output comes from whittling down the number of priorities rather than expanding them. It’s far better to deliver fully on the key priority, rather than getting pulled in every direction and subsequently failing to deliver anything of value.

—Rahul

## This Professor’s Open-Source Robots Make STEM More Inclusive

Carlotta Berry is an electrical and computer engineering professor focused on bringing low-cost mobile robots to the public so that anyone can learn about robotics. She demonstrates open-source robots of her own design at schools, libraries, museums, and other community venues. Learn how her work earned her an Undergraduate Teaching Award from the IEEE Robotics and Automation Society. Read more here.

## Scientists Need a Positive Vision for AI

We should not resign ourselves to the story of AI making experiences worse, say Bruce Schneier and Nathan E. Sanders at Harvard University. Rather, scientists and engineers should recognize the ways in which AI can be used for good. They suggest reforming AI under ethical guidelines, documenting negative applications of AI, using AI responsibly, and preparing institutions for the impacts of AI. Read more here.

## Should You Use AI to Apply for Jobs?

Many job seekers are now using AI during the application process.
This trend has led to a deluge of AI-generated resumes and cover letters that recruiters must now sift through, but when used thoughtfully, AI can help applicants find a match in an increasingly difficult job market. __The Chronicle of Higher Education__ shares some dos and don’ts of using AI to apply for jobs. Read more here.
spectrum.ieee.org
November 20, 2025 at 8:19 AM
This IBM Engineer Is Pushing Quantum Computing Out of the Lab
Genya Crossman is a lifelong learner passionate about helping people understand and use quantum computing to solve the world’s most complex problems. So, she is excited that quantum computing is in the spotlight this year. UNESCO declared 2025 the International Year of Quantum Science and Technology. It’s also the 100th anniversary of physicist Werner Heisenberg’s “On the Quantum-Theoretical Reinterpretation of Kinematic and Mechanical Relationships,” the first published paper on quantum mechanics.

Crossman, an IEEE member, is a quantum strategy consultant at IBM in Germany. As a full-time staff member, she coordinates and manages five working groups focused on developing quantum-based solutions for near-term problems in health care and life sciences, materials science, high-energy physics, optimization, and sustainability.

### Genya Crossman

**Employer:** IBM in Germany

**Job title:** Quantum strategy consultant

**Member grade:** Member

**Alma maters:** University of Massachusetts, Amherst; Delft University of Technology and the Technische Universität Berlin

She attended the sixth annual IEEE Quantum Week, held from 31 August to 5 September in Albuquerque. This year’s event, also known as the IEEE International Conference on Quantum Computing and Engineering, marked the first time that the IBM- and community-created working groups’ experts and collaborators publicly presented their research together. “We got great feedback and information about identifying common features across groups,” Crossman says. “The audience got to hear real-life examples to understand how quantum computing applies to different scenarios and how it works.”

Crossman understands the importance of sharing research more than most because she works at the intersection of quantum computing research and practical application. The quantum field might seem intimidating, she says, but you don’t need to understand it to use a quantum computer. “Anyone can use one,” she says. “And if you know programming languages like Python, you can code a quantum computer.”

## The basics of quantum computing

IBM has a long-standing history with quantum computing. IEEE Member Charles H. Bennett, an IBM Fellow, is called the father of quantum information theory because he wrote the first notes on the subject in 1970. In May 1981, IBM and MIT held the first Physics of Computation Conference.

“Quantum computing is often used to describe all quantum work,” including quantum science and quantum technology, Crossman says. The field involves a variety of technologies, including sensors, metrology, and communications. Classical computers use bits, and quantum computers use quantum bits, called __qubits__. Qubits can exist in more than one state simultaneously (both one and zero), an ability known as superposition. Computers using qubits can store and process highly complex information and data faster and more efficiently, possibly using significantly less energy than classical computers.

With so much power and processing ability, quantum computers are complex and still not fully understood. Engineers are working to make quantum computing more accessible to everyone, so more people can understand how to work with the technology, Crossman says.

## Inspired by her father and IEEE

Growing up on the North Shore of Boston, Crossman spent many summer mornings poring over the latest issues of __IEEE Spectrum__ and __Scientific American__ with her older sister.
Her father, Antony Crossman, is an electrical and electronics engineer and an IEEE life member. He often discussed science and engineering concepts with his daughters. Looking back, Crossman says, she sees reading __Spectrum__ as her first introduction to how research is presented. “I loved reading about new research and what could be done with it,” she says. “It helped point me toward engineering as a career.” When she enrolled at McGill University in Montreal in 2011 to pursue a bachelor’s degree in physics, her father gifted her an IEEE student membership. “Montreal is a beautiful, creative city that’s also relatively easy to travel to from Boston within a day,” she says. “Plus, the school was known for its physics program.” After two years, she dropped out and moved to Paris, where she worked in a café. A year later, in 2014, she enrolled in the physics degree program at the University of Massachusetts, Amherst. In the summer of 2016, Crossman’s undergraduate advisor, Professor Stéphane Willocq, recommended her for a research project in the Microsystems Technology Laboratory within MIT’s electrical engineering department. “Quantum computing is often used to describe all quantum work, including quantum computing, quantum science, and quantum technology.” “I had been conducting research” with Willocq, she says, “and he knew I was considering going into electrical engineering, so he suggested I apply for this summer research opportunity.” As a research assistant, she examined carrier transport in transistors and diodes made with two-dimensional materials. After graduating with a bachelor’s degree in physics in 2017, she initially planned to go straight to graduate school, she says, but she wasn’t sure what she wanted to focus on. A friend and former classmate from an undergraduate quantum mechanics course referred her to a quantum computing job opening at Rigetti Computing in Berkeley, Calif. She was hired as a junior quantum engineer. She started by creating the predecessor to, and then the schema for, the company’s first device database. She then designed, modeled, and simulated quantum devices such as circuits for superconducting quantum computers, including some used in the first deployed quantum systems. She also managed the Berkeley fabrication facility. In that role, she learned a great deal about electrical and microwave engineering, she says, and that introduced her to computational modeling. It led her to better understand practical applications of quantum computing, she says. Her newfound knowledge made her “want to learn why and how people use quantum technology,” she says, which is how she became interested in the end users’ needs. To further her career, she left Rigetti in 2020 and moved to Germany to pursue a dual master’s degree in computational and applied mathematics through a joint program between the Delft University of Technology and the Technische Universität Berlin. When she first began her master’s program, IBM recruiters offered her two jobs, she says, but she declined because she wanted to finish her degree. During her studies, she worked with her mentor Eliska Greplova, an associate professor at TU Delft, who invited Crossman to join her quantum matter and AI research group. Crossman learned about condensed matter, machine learning, and quantum learning, and she participated in discussions about the technologies’ implications. 
Despite being a great experience, it ultimately led her to decide against pursuing a Ph.D., she says, because she enjoyed working in the industry and that’s where she wanted to be in the long run. She had planned to focus her master’s thesis on quantum computing from the end user’s perspective, but she switched to writing about integrating topological properties onto superconducting hardware. She graduated in 2022. In January 2023, she accepted a full-time position at IBM Research in Germany as a quantum strategy consultant, supporting enterprise clients. Since then, her job has changed to technical engagement lead, overseeing the five quantum working groups. She is also part of the team that oversees the company’s responsible computing initiative. IBM defines responsible quantum computing as the type that’s “aware of its effects.” The company says it wants to ensure it develops and uses quantum computing in line with its principles. Established in 2022 by IBM and researchers from other organizations, the working groups tackle near-term problems and look for quantum and interdisciplinary solutions in their area of focus, Crossman says. The groups are community-driven, with researchers from both quantum and nonquantum backgrounds collaborating to identify key problems, decide what to pursue, and pool their expertise to fill gaps, allowing them to look at problems holistically, she says. The groups regularly publish papers and make them publicly available. Crossman’s job is to support the researchers, locate resources, help them use the IBM ecosystem, and identify experts to answer niche questions. Her other focus is on the end users, the people who will employ the research emerging from the working groups. She says she seeks to understand their needs and how to best support them. “I really enjoy quantum engineering and working with everyone because it’s such an interdisciplinary field,” she says. “It combines problem-solving with creativity. It’s really at an exciting stage of development.” With so much momentum, Crossman says, she is eager to see where quantum technologies go next. “When I started learning about quantum mechanics in undergrad, there wasn’t much information out there,” she says. “The beginning of my career was when the quantum computing industry was just getting started. I’m really grateful for that.” ## Staying current on research Being an IEEE member allows Crossman to stay updated on research across multiple fields, she says, and that’s important because most of them “are becoming much more interdisciplinary, especially quantum computing.” She says she is looking forward to collaborating more with IEEE members working on quantum computing. “I’ve always found IEEE useful,” she says. “I can learn about new research in my and other fields, and I really enjoyed attending this year’s Quantum Week.”
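Crossman’s point that anyone who knows Python can program a quantum computer is easy to demonstrate. The minimal sketch below uses IBM’s open-source Qiskit library and a local statevector simulation; it is a generic textbook example of entangling two qubits, not code from the working groups.

```python
# Minimal illustration of programming a quantum computer in Python with
# IBM's open-source Qiskit library (simulated locally, no hardware needed).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into a superposition of 0 and 1
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

# The resulting Bell state has equal amplitude on |00> and |11>.
print(Statevector.from_instruction(qc))
```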
spectrum.ieee.org
November 20, 2025 at 8:19 AM
A New Axial-Flux Motor Becomes a Supercar Staple
Tesla was first to patent a primitive axial-flux electric motor— _Nikola_ Tesla, that is, way back in 1889. It would be 126 years before the concept found its way to a car, the 1,500-horsepower (1,103 kilowatt), US $1.9 million Koenigsegg Regera hybrid, in 2015. Even today, nearly all the world’s EVs and hybrids rely on relatively inefficient, easy-to-manufacture _radial_ -flux motors.

Yet the latest electrified revolution is underway, led by YASA. Founded in the U.K. by Tim Woolmer in 2009 as a spin-off from his Oxford Ph.D. project, the company makes pioneering axial-flux motors that are powering hybrid supercars from a Who’s Who of makers: Ferrari, Lamborghini, McLaren, and Koenigsegg. Those include the Ferrari 296 Speciale and Lamborghini Temerario that I recently drove in Italy. Boosted by these power-dense electric machines, these racy Italians carved up roads in Emilia-Romagna like hunks of prosciutto di Parma.

The Temerario’s gasoline V-8 revs to a stratospheric 10,000 rpm, higher than any production supercar. Still not enough: The Temerario also integrates three YASA motors. A pair on the front axle deliver all-wheel-drive traction and a peak 294 horsepower (216 kilowatts). A total of 907 hybrid horsepower (667 kilowatts) sends the Temerario to a blistering 343 kph (213 mph) top speed. The electric motors ably fill any gaps in gasoline acceleration and finesse the handling with torque-vectoring, the electrified front wheels helping to catapult the Lamborghini out of corners with ridiculous ease.

With their compact design and superior power-to-weight ratio, these motors are setting records on land, sea, and air. The world’s fastest electric plane, the Rolls-Royce Spirit of Innovation, integrated three YASA motors for its propeller, sending it to a record 559.9 kph (345.4 mph) top speed. Applying tech from its Formula E racing program, Jaguar used YASA motors to set a maritime electric speed record of 142.6 kph (88 mph) in England’s Lake District in 2018 (that record has since been broken).

## Claimed Power Density Is Three Times Tesla’s Best

In August, YASA’s motors helped the Mercedes-AMG GT XX prototype set dozens of EV endurance records. Cruising around Italy’s Nardo circuit at a sustained 186 mph (300 kph), the roughly 1,000-kilowatt (1,360 horsepower) Mercedes EV drove about 5,300 kilometers per day. In 7.5 days, it traveled 40,075 kilometers (24,902 miles), the exact equivalent of the earth’s circumference. That time included stops for charging, at 850 kilowatts.

Mercedes F1 driver George Russell stands next to a Mercedes-AMG GT XX during its record-setting endurance run this past August. Powered by three YASA axial-flux motors, the concept EV drove the equivalent of the earth’s circumference in 7.5 days, at a near-steady 300 kph. A production version of the car could be a competitor for the Porsche Taycan. Credit: Mercedes-Benz

Mercedes bought YASA outright in 2021. Daimler, Mercedes’ corporate parent, is retrofitting a factory in Berlin to build up to 100,000 YASA motors a year, for the next logical step: The motors will power mass-produced EVs for the first time, specifically from AMG, Mercedes’ formidable high-performance division.

The company recently unveiled its latest motor, and its stats are eye-opening: The axial-flux prototype generates a peak 750 kilowatts, or 1,005 horsepower, as tested on a dynamometer. The motor can output a continuous 350-400 kilowatts (469-536 horsepower). Yet the unit weighs just 12.7 kilograms (27.9 pounds).
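The power-density figure Woolmer cites next is simple division on those dynamometer numbers. A quick check, with the caveat that the radial-flux baseline below is an assumed round number for comparison, not a quoted specification:

```python
# Quick arithmetic check on the quoted dynamometer figures.
peak_power_kw = 750        # peak output quoted for the prototype
mass_kg = 12.7             # quoted motor mass

power_density = peak_power_kw / mass_kg
print(f"{power_density:.0f} kW/kg")   # ~59 kW/kg, matching YASA's claim

# For scale: an assumed ~20 kW/kg for a leading radial-flux drive unit
# (an illustrative round number, not a measured spec) gives roughly the
# "about three times" gap described below.
radial_flux_assumed_kw_per_kg = 20
print(f"{power_density / radial_flux_assumed_kw_per_kg:.1f}x")   # ~3.0x
```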
Woolmer says the resulting power density of 59 kilowatts per kilogram is an unofficial world record for an electric motor, and about three times that of leading radial-flux designs, including Tesla’s. “And this isn’t a concept on a screen — it’s running, right now, on the dynos,” Woolmer says. “We’ve built an electric motor that’s significantly more power-dense than anything before it, all with scalable materials and processes.”

Simon Odling, YASA’s chief of new technology, walks me through the tech. Conventional radial-flux motors are shaped like a sausage roll. A spinning rotor is housed within a stationary stator. The lines of magnetic flux are oriented radially, perpendicular to the motor’s central shaft. These flux lines represent the interacting magnetic fields of the permanent magnets in the rotor and electromagnets in the stator. It is that interaction that provides torque.

An axial-flux design is more like a pancake. In YASA’s configuration, a pair of much-larger rotors are positioned on either side of the stator, and, notably, all three have roughly the same diameter. Magnetic flux is oriented axially, parallel to the shaft. Because torque is proportional to the rotor diameter squared, an axial-flux design can generate substantially more torque than a comparable radial-flux unit. The dual permanent-magnet rotors double the key torque-generating components, and ensure a short magnetic path, which enhances efficiency by reducing losses in the magnetic field.

YASA R&D engineer Eddie Martin holds a 12.7-kilogram axial-flux motor that cranks out 750 kilowatts (1,005 horsepower). Credit: YASA

Odling says the company’s motors are about one-third the mass and length of a comparable radial-flux machine, with intriguing upsides for vehicle packaging and weight savings. “The motor sits between an engine and gearbox really nicely in a hybrid application, or it makes for a very compact drive unit in an EV,” Odling says. The configuration is also ideal for in-wheel motors, because the flat shape fits easily within the width of car and even motorcycle wheels. YASA also touts the weight savings. Cascading gains in vehicle architecture could eliminate at least 200 kilograms from today’s EVs, the company says, about half from the motors themselves, the rest from smaller batteries, brakes, and lighter-weight supporting structures.

## YASA’s Secret Sauce Is a Soft Magnetic Composite

The company’s name offers another clue to its technical edge: YASA stands for “Yokeless and Segmented Architecture.” The motors remove a heavy iron or steel yoke, the structural and magnetic backbone for the copper coils in a conventional stator. Instead, they use a soft magnetic composite (SMC)—a material that has very high magnetic permeability. That characteristic means the material is a very effective conductor of magnetic flux, so it can be used to concentrate and direct the field in the motor. In a typical application, the stator’s coils are wrapped around structures made of SMC.

Woolmer began studying SMCs in the mid-2000s, before there were potential paying customers for his nascent motor designs: The first Tesla Roadster didn’t hit the road until 2008, and suppliers and tooling for these motors didn’t exist then. Woolmer’s early axial-flux designs finally made their way into the Jaguar C-X75 in 2010, a concept that was cancelled prior to production. By 2019, Ferrari was integrating one of Woolmer’s motors in its first hybrid, the SF90.
SMC became a key innovation, because axial-flux motors couldn’t be manufactured with the stacked-steel laminations of radial-flux machines. Woolmer segmented the stator into individual “pole pieces” made from SMC, which can be formed under pressure into a huge variety of 3D shapes. That flexibility greatly reduces weight and eddy-current losses, and lessens the cooling burden. Where a conventional motor might have 30 kilograms of iron, a comparable YASA design would need only 5 kilograms to generate the same power and torque. YASA’s stators also integrate flat copper windings with direct oil cooling, Odling says, with no “buried copper” that the oil can’t reach. That greatly improves thermal performance and recovery in stressful conditions, a potential boon for high-performance EVs. YASA designs and develops its motors at its Oxford Innovation Center. In May, it opened a new axial-motor “super factory” in nearby Yarnton, with capacity for more than 25,000 motors each year. The company also credits the British Advanced Propulsion Center (APC) as a linchpin of its expansion. The collaboration between the U.K. government, industry and academia looks to accelerate homegrown development of zero-emissions transportation to meet Net Zero targets. YASA intends to release more specifics on its latest prototype motor in December. But company executives say the motor is ready for customers, with no exotic materials or manufacturing techniques required.
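Odling’s torque-versus-diameter argument is worth making concrete. The sketch below is a toy illustration of the stated scaling law only (torque growing with the square of rotor diameter at comparable electromagnetic loading); the diameters are arbitrary example values, not YASA dimensions.

```python
# Toy illustration of the scaling law cited above: torque ~ diameter^2
# at comparable electromagnetic loading (an idealization, not a motor model).
def relative_torque(diameter_m: float, reference_diameter_m: float) -> float:
    """Torque of a rotor relative to a reference rotor, under the D^2 rule."""
    return (diameter_m / reference_diameter_m) ** 2

# A flat axial-flux rotor 1.5x the diameter of a radial-flux rotor would
# produce about 2.25x the torque under this idealization.
print(relative_torque(0.30, 0.20))   # 2.25
```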
spectrum.ieee.org
November 20, 2025 at 8:19 AM
Why Is Everyone’s Robot Folding Clothes?
It seems like every week there’s a new video of a robot folding clothes. We’ve had some fantastic demonstrations, like this semi-autonomous video from Weave Robotics on X. It’s awesome stuff, but Weave is far from the only company producing these kinds of videos. Figure 02 is folding clothes. Figure 03 is folding clothes. Physical Intelligence launched their flagship vision-language-action model, pi0, with an amazing video of a robot folding clothes after unloading a laundry machine. You can see robots folding clothes live at robotics expos. Even before all this, Google showed clothes folding in their work, ALOHA Unleashed. 7X Tech is even planning to sell robots to fold clothes! And besides folding actual clothes, there are other clothes-folding-like tasks, like Dyna’s napkin folding—which leads to what is probably my top robot video of the year, demonstrating 18 hours of continuous napkin folding.

So why are all of these robotic manipulation companies suddenly into folding?

## Reason 1: We basically couldn’t do this before

There’s work going back over a decade that shows some amount of robotic clothes folding. But these demonstrations were extremely brittle, extremely slow, and not even remotely production ready. Previous solutions existed (even learning-based solutions!) but they relied on precise camera calibration, or on carefully hand-designed features, meaning that these clothes-folding demos generally worked only on one robot, in one environment, and may have only ever worked a single time—just enough for the recording of a demo video or paper submission.

With a little bit of help from a creatively patterned shirt, PR2 was folding things back in 2014. Credit: Bosch/IEEE

Take a look at this example of UC Berkeley’s PR2 folding laundry from 2014. This robot is, in fact, using a neural network policy. But that policy is very small and brittle; it picks and places objects against the same green background, moves very slowly, and can’t handle a wide range of shirts. Making this work in practice would require larger models, pretrained on web-scale data, and better, more general techniques for imitation learning. And so 10 years later, with the appropriate demonstration data, many different startups and research labs have been able to implement clothes-folding demos; it’s something we have seen from numerous hobbyists and startups, using broadly similar tools (like LeRobot from Hugging Face), without intense specialization.

## Reason 2: It looks great and people want it!

Many of us who work in robotics have this “north star” of a robot butler which can do all the chores we don’t want to do. Mention clothes folding, and many, many people will chime in about how they don’t ever want to fold clothes again and are ready to part with basically any amount of money to make that happen. This is important for the companies involved as well. Companies like Figure and 1X have been raising large amounts of money predicated on the idea that they will be able to automate many different jobs, but increasingly these companies seem to want to start in the home.

Dyna Robotics can fold an indefinite number of napkins indefinitely. Credit: Dyna Robotics

And that’s part of the magic of these demos. While they’re slow, and imperfect, everyone can start to envision how this technology becomes the thing that we all want: a robot that can exist in our house and mitigate all those everyday annoyances that take up our time.
## Reason 3: It avoids what robots are still bad at These robot behaviors are produced by models trained via imitation learning. Modern imitation learning methods like Diffusion Policy use techniques inspired by generative AI to produce complex, dexterous robot trajectories, based on examples of expert human behavior that’s been provided to them—and they often need many, many trajectories. The work ALOHA Unleashed by Google is a great example, needing about 6,000 demonstrations to learn how to, for example, tie a pair of shoelaces. For each of these demonstrations, a human piloted a pair of robot arms while performing the task; all of this data was then used to train a policy. We need to keep in mind what’s hard about these demonstrations. Human demonstrations are never __perfect__ , nor are they perfectly consistent; for example, two human demonstrators will never grab the __exact__ same part of an object with sub-millimeter precision. That’s potentially a problem if you want to screw a cover in place on top of a machine you’re building, but it’s not a problem at all for folding clothes, which is fairly forgiving. This has two knock-on effects: * It’s easier to collect the demonstrations you need for folding clothes, as you don’t need to throw out every training demonstration that’s a millimeter out of spec. * You can use cheaper, less repeatable hardware to accomplish the same task, which is useful if you suddenly need a fleet of robots collecting thousands of demos, or if you’re a small team with limited funding! For similar reasons, it’s great that with cloth folding, you can fix your cameras in just the right position. When learning a new skill, you need training examples with “coverage” of the space of environments you expect to see at deployment time. So the more control you have, the more efficient the learning process will be—the less data you’ll need, and the easier it will be to get a flashy demo. Keep this in mind when you see a robot folding things on a plain tabletop or with an extremely clean background; that’s not just nice framing, it helps the robot out a lot! And since we’ve committed to collecting a ton of data—dozens of hours—to get this task working well, mistakes will be made. It’s very useful, then, if it’s easy to __reset__ the task, i.e. restore it to a state from which you can try the task again. If something goes wrong folding clothes, it’s fine. Simply pick the cloth up, drop it, and it’s ready for you to start over. This wouldn’t work if, say, you were stacking glasses to put away in a cupboard, since if you knock over the stack or drop one on the floor, you’re in trouble. Clothes folding also avoids making forceful contact with the environment. Once you’re exerting a lot of pressure, things can break, the task can become non-resettable, and demonstrations are often much harder to collect because forces aren’t as easily observable to the policy. And every piece of variation (like the amount of force you’re exerting) will end up requiring more data so the model has “coverage” of the space it’s expected to operate in. ## What To Look Forward To While we’re seeing a lot of clothes-folding demos now, I still feel, broadly, quite impressed with many of them. As mentioned above, Dyna was one of my favorite demos this year, mostly because longer-running robot policies have been so rare until now. 
But they were able to demonstrate zero-shot folding (meaning folding without additional training data) at a couple of different conferences, including Actuate in San Francisco and the Conference on Robot Learning (CoRL) in Seoul. This is impressive and actually very rare in robotics, even now. In the future, we should hope to see robots that can handle more challenging and dynamic interactions with their environments: moving more quickly, moving heavier objects, and climbing or otherwise handling adverse terrain while performing manipulation tasks. But for now, remember that modern learning methods will come with their own strengths and weaknesses. It seems that, while not easy, clothes folding is the kind of task that’s just really well suited for what our models can do right now. So expect to see a lot more of it.
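The imitation-learning recipe described under Reason 3, fitting a policy to recorded human teleoperation, can be sketched very compactly. The PyTorch sketch below is a heavily simplified behavior-cloning loop: the observation and action dimensions, network, and random data are placeholders, and real folding systems use image observations and diffusion- or transformer-based policies rather than this toy regressor.

```python
# Heavily simplified behavior-cloning sketch: fit a policy to recorded
# (observation, action) pairs from human teleoperation. Dimensions, network,
# and data below are illustrative placeholders, not any company's pipeline.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 14   # e.g. proprioception features -> two 7-DoF arms

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for thousands of teleoperated demonstration steps.
demo_obs = torch.randn(10_000, OBS_DIM)
demo_act = torch.randn(10_000, ACT_DIM)

for step in range(1_000):
    idx = torch.randint(0, demo_obs.shape[0], (256,))    # random minibatch
    pred = policy(demo_obs[idx])
    loss = nn.functional.mse_loss(pred, demo_act[idx])   # imitate the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```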
spectrum.ieee.org
November 20, 2025 at 8:20 AM
Amazon Pilots New Pedal-Assist Electric Delivery Vehicle
Amazon is piloting a new four-wheel, pedal-assist electric delivery vehicle built by Also, a spin-off from electric-truck maker Rivian, in a bid to make city logistics cleaner and more efficient. The vehicle—called the TM-Q—combines the stability and cargo capacity of a small van with the compact footprint of an e-bike. Amazon plans to deploy the TM-Q in several major cities as part of its broader strategy to decarbonize last-mile delivery.

The TM-Q aims to solve one of the toughest challenges in urban logistics: moving heavy loads quickly through crowded city centers where trucks are inefficient and often unwelcome. Designed to slip through traffic and park in tight spaces, the vehicle lets couriers pedal with electric assist or switch to full battery power on steep hills. The TM-Q also cuts down on the emissions and noise that have made traditional vans a target of new low-emission-zone regulations across Europe and the United States. The project marks the first large-scale deployment of pedal-assist “micro-vans” by a global logistics company—a new middle ground between cargo bikes and delivery vans.

## Rivian’s Micromobility Spin-off

Also began as an internal Rivian project to explore how the company’s EV expertise could extend into micromobility. It became an independent company in early 2025 with US $105 million in Series A funding from Eclipse Ventures. Rivian retains a minority stake, and founder RJ Scaringe sits on Also’s board. “Everything we learned from the Electric Delivery Van (EDV) program was poured into this project,” Scaringe said at the launch event in Oakland, Calif. That shared heritage allows Amazon to manage both Rivian’s electric vans and Also’s new quads through a shared fleet-management system—a logistical advantage for a company already operating more than 25,000 Rivian EDVs worldwide. All of Also’s hardware and software are built in-house, using lessons from Rivian’s vehicle architecture but with a separate supply chain, leadership team, and technology stack.

The TM-Q’s pedal-by-wire powertrain merges human input with the same kind of safety-tested control logic found in full-size electric vehicles—just scaled down for a bike. Torque and cadence sensors at the crank measure how hard and how fast the rider pedals. Those signals feed a controller that, within milliseconds, determines how much electric power to add from the rear-hub motor. The harder the rider pushes, the more assist the system provides—up to legally defined limits (250 watts continuous in the EU and higher in the U.S.).

Because the drivetrain is fully electronic, Also can tune the assist ratio through software updates—a practice borrowed from Rivian’s EV tuning. The system also applies regenerative braking, recovering small amounts of energy to recharge the battery when slowing or stopping. Power comes from a modular, swappable lithium-ion pack, light enough to be carried by hand. Standard packs offer roughly 538 watt-hours for up to 112 kilometers (70 miles) of range, while larger 808 Wh packs extend that to 160 km (100 miles). Both versions support regenerative braking, which adds about 25 percent to effective range. Charging uses USB-C PD 3.1—the latest major version of the USB Power Delivery standard.
The updated charging modality allows for faster, more efficient energy transfer (in this case, at up to 240 W), with communication between the charger and the battery meant to prevent issues such as overheating and thermal runaway that trigger fires. Also says the power packs are designed for bidirectional energy flow, meaning they can double as portable power banks. Also is developing battery-dock swap stations so couriers can replace depleted packs in minutes instead of waiting for recharging—key to keeping the e-bikes in motion during multi-shift operations. The TM-Q includes a lockable cargo enclosure, logistics and charging-management software, and a 12.7-centimeter (5-inch) touchscreen for navigation and diagnostics. Amazon plans to service the quads through its network of more than 70 micromobility hubs across North America and Europe. ## Challenges in Scaling Pedal-Assist Quads Four-wheeled pedal-assist vehicles have been attempted before, but scaling them has proven difficult. In the European Union, pedal-assist quads under 250 watts often qualify as bicycles, but in the United States, they fall into a gray zone between e-bikes and light electric vehicles. Each state sets different speed and power limits, complicating large-scale deployment. Also’s approach is to adapt each regional configuration to fit local micromobility laws. The company’s two main platforms illustrate this flexibility: the TM-Q, the quad built to provide near–cargo van capacity but optimized for bike-lane compliance, and the two-wheeled TM-B, a consumer-focused pedal-assist e-bike with a top assisted speed of 45 km/h (28 mph). The TM-B has open seating for passengers or groceries and can be configured with other body styles built on the same chassis. The company says both the TM-Q and TM-B are tested to automotive-grade reliability and safety standards, exceeding traditional e-bike durability benchmarks. Frames and electrical systems undergo vibration, impact, and water-resistance testing equivalent to that used to certify full-size EVs. Also president Chris Yu says the company is not just building bikes with motors. “We’re applying car-level engineering to machines that move through city bike lanes.” The Amazon–Also partnership reflects a broader industry shift toward “right-sizing” delivery fleets—deploying the smallest, most efficient vehicle for each route. Cities including London, New York, and Paris are tightening restrictions on van access and idling, making compact electric vehicles not just environmentally beneficial but also regulatory necessities. Also claims its TM series vehicles are 10 to 50 times as energy-efficient as local car or SUV trips, supporting cities’ emissions-reduction goals without requiring new road infrastructure. Initial production of the TM-Q is scheduled for early 2026, with final assembly in Taiwan, home to much of the world’s high-end e-bike manufacturing. Several key components, including frames and subassemblies, are produced in the United States, says Also. If the TM-Q delivers on its promise, city streets could soon trade four tons of steel for four wheels and a set of pedals—reshaping last-mile delivery one bike lane at a time.
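The pedal-by-wire loop described earlier (torque and cadence sensors feeding a controller that decides how much motor power to add, capped at the legal limit) can be sketched in a few lines. Everything below is illustrative: the gain and sensor values are assumptions, the only figure taken from the article is the 250-watt continuous EU cap, and none of it reflects Also’s actual firmware.

```python
# Illustrative pedal-assist control step, not Also's firmware. The only
# number taken from the article is the 250 W continuous EU assist cap.
import math

def assist_power_watts(pedal_torque_nm: float,
                       cadence_rpm: float,
                       assist_gain: float = 1.5,
                       legal_cap_w: float = 250.0) -> float:
    """Decide how much electric power to add for one control cycle."""
    # Rider's own mechanical power: P = torque * angular velocity.
    cadence_rad_s = cadence_rpm * 2 * math.pi / 60
    rider_power_w = pedal_torque_nm * cadence_rad_s

    # "The harder the rider pushes, the more assist": scale the rider's
    # power, then clamp to the legal limit.
    return min(assist_gain * rider_power_w, legal_cap_w)

# 30 N*m at 70 rpm is ~220 W of rider effort; 1.5x assist would be ~330 W,
# so the controller clamps the motor contribution to the 250 W cap.
print(assist_power_watts(30, 70))
```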
spectrum.ieee.org
November 18, 2025 at 7:39 PM
The First Ticket Pre-Purchasing System Was Created 65 Years Ago
For Japan’s train commuters in the years following World War II, buying a ticket could be a stressful experience. Today it’s not difficult to go online and reserve a seat, but 65 years ago, travelers faced long queues at the ticket window and limited ways to find out if seats were available. Reservations were handwritten in a paper ledger, and there were plenty of accidental double-bookings. Travelers had no real way to know if they’d have—or could get—a reservation once they reached a ticket window. All that changed in 1960, when the Japanese National Railways (JNR), which operated the country’s system, partnered with technology company Hitachi to introduce the world’s first fully automated railway booking system: the Magnetic-electronic Automatic Reservation System-1. MARS-1 gave JNR the ability to reserve up to 3,600 seats per day for travelers across four routes between Tokyo and Osaka. Bookings could be accepted up to 15 days in advance. Passengers no longer had to gamble on availability, because reservations were confirmed in seconds. Riders traveling in groups could even reserve seats next to each other, ensuring families could stay together during the journey. The system has been commemorated as an IEEE Milestone for its role in transforming railway ticketing in Japan, and even in other countries. As of press time, the dedication ceremony was being planned. ## Introducing computers to Japan After the end of World War II, Japan’s economy began to recover relatively quickly, thanks to sweeping economic reforms that led to an industrial boom by the mid-1950s. Thanks in part to its economic growth, Japan invested heavily in its rail infrastructure, making trains more efficient and convenient for daily commuters and for long-distance travelers. As ridership soared, the inefficiency of the country’s railroad ticketing system quickly became apparent. JNR’s research institute took on the task of finding a solution. One of its engineers, Mamoru Hosaka, was already studying how computers could help automate certain tasks. Hosaka received the 2006 IEEE Computer Society Pioneer Award for his work on what later became MARS-1. In 1954 he successfully persuaded his colleagues and company executives to green-light a study into using computers to control railway systems, according to his Computer Society biography. Three years later, he shifted his focus and formed a team to investigate developing an automated reservation system using magnetic drum memory with a Bendix G-15 computer. Widely used in the 1950s and 1960s, magnetic drum memory stored information on the outside of a rotating cylinder coated with magnetic iron. “The technical achievements of MARS-1 and its successors reached well beyond the railway. They were foundational to the development of real-time transaction systems that shape modern life.” Hosaka and his team designed a prototype system composed of custom control circuits that could rapidly retrieve and update seat information for four new express trains that connected Tokyo and Osaka. For each reservation, the system verified seat availability, issued confirmation slips, and updated the records—all within seconds. The design was handed off to engineers at Hitachi in Tokyo, who developed a working system—MARS-1—two years later. It was first installed in 1960 at Tokyo Station and was one of the earliest large-scale deployments of a computerized system that captured, processed, and stored routine business transactions in real time. 
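MARS-1’s core job, confirming a seat in seconds while making accidental double-bookings impossible, is the essence of a real-time transaction. The toy sketch below illustrates only that invariant in memory; it is nothing like the actual magnetic-drum hardware, and the train name and seat count are made up for illustration.

```python
# Toy sketch of the invariant MARS-1 enforced: a seat can be sold only once,
# and each booking is confirmed or rejected immediately. This is an in-memory
# illustration, not a model of the actual magnetic-drum system.
import threading

class SeatLedger:
    def __init__(self, train: str, seats: int):
        self.train = train
        self.available = set(range(1, seats + 1))
        self._lock = threading.Lock()   # one booking transaction at a time

    def reserve(self, seat: int) -> bool:
        """Atomically confirm or reject a reservation for one seat."""
        with self._lock:
            if seat in self.available:
                self.available.remove(seat)
                return True    # confirmed on the spot
            return False       # already sold: no double-booking possible

ledger = SeatLedger("Tokyo-Osaka express", seats=120)   # illustrative values
print(ledger.reserve(42))   # True: seat confirmed
print(ledger.reserve(42))   # False: the same seat cannot be sold twice
```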
Streamlining reservations made rail service more efficient and reliable, which was critical for workers, students, and families traveling between growing cities. ## Scaling up for the bullet train Although the launch of MARS-1 was regarded as a major success, the system quickly revealed its limitations. By 1964, Japan was preparing to launch the world’s first high-speed rail line—the Shinkansen—(another IEEE Milestone)—known as the bullet train. The Shinkansen would reduce the travel time from Tokyo to Osaka from nearly seven hours to a little more than three. With the capacity for more trips per day, MARS-1’s initial throughput of 3,600 daily bookings could no longer meet demand. By October 1965, the upgraded MARS-102 system was installed in 152 stations across Japan. It consisted of three computers. The first searched trains, schedules, fares, and other tables. The second searched for and booked vacant seats. The third, and main computer, managed and controlled the system’s overall processing sequence. The computers exchanged data using a shared magnetic core memory unit, according to the Information Processing Society of Japan’s Computer Museum website. The MARS-102 could process up to 150,000 seats, about five times more than the previous system. Engineers continued to make upgrades, and by 1991, the system supported daily sales of more than 1 million tickets. ## Inspiring reservation systems worldwide MARS-1’s influence extended far beyond Japan. The system pioneered many of the principles that later underpinned global reservation systems. Sabre, developed by American Airlines in the early 1960s, used similar real-time transaction concepts for airline reservations. MARS-1 also paved the way for transaction-processing computers found in e-commerce, banking, and stock exchanges. Banks adopted comparable architectures for their ATM networks. Hotel chains developed automated room-booking platforms to process thousands of simultaneous transactions. “The technical achievements of MARS-1 and its successors reached well beyond the railway,” the Milestone proposers wrote. “They were foundational to the development of real-time transaction systems that shape modern life.” ## Recognition as an IEEE Milestone A plaque recognizing the MARS-1 as an IEEE Milestone is to be installed at the Railway Technical Research Institute, in Tokyo. It will read: __In 1960 Japanese National Railways introduced Magnetic-electronic Automatic Reservation System-1 (MARS-1), the first automated railway booking system. Initially supporting real-time reservations of 3,600 seats per day and up to 15 days in advance, it later adopted a task-sharing multicomputer architecture that could support additional routes including the bullet train in 1965. Continually upgraded, it supported daily sales of more than 1 million tickets by 1991 and reshaped worldwide rail ticketing.__ Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE Tokyo Section sponsored the nomination.
spectrum.ieee.org
November 17, 2025 at 10:06 PM
Microfluidics Could Be the Answer to Cooling AI Chips
Data center rack density has risen rapidly in recent years. Operators are cramming more computing power into each server rack to meet the needs of AI and other high-performance computing applications. That means that each rack needs more kilowatts of energy, and ultimately generates more heat. Cooling infrastructure has struggled to keep pace. “Rack densities have gone from an average of 6 kilowatts per rack eight years ago to the point where racks are now shipping with 270 kW,” says David Holmes, the global industries CTO at Dell Technologies. “Next year, 480 kW will be ready, and megawatt racks will be with us within two years.”

Corintis, a Swiss company, is developing a technology called microfluidics, in which water or another cooling liquid is channeled directly to specific parts of a chip to prevent overheating. In a recent test with Microsoft, servers running the company’s Teams video conferencing software recorded heat-removal rates three times as high as those of other existing cooling methods. Compared to traditional air cooling, microfluidics lowered chip temperatures by more than 80 percent.

## Boosting Chip Performance with Microfluidics

A lower chip temperature allows the chip to execute instructions more quickly, increasing its performance. Chips operating at lower temperatures are also more energy efficient and have lower failure rates. Further, the temperature of the air used for cooling can be increased, making the data center more energy efficient by reducing the need for chillers, and lowering liquid consumption.

The amount of water needed to cool a chip can be lowered considerably by targeting the liquid’s flow to the locations on the chip that are generating the most heat. Remco van Erp, the co-founder and CEO of Corintis, notes that the current industry standard is approximately 1.5 liters per minute per kilowatt of power. As chips near 10 kW, this will soon mean 15 liters per minute to cool one chip—a number that will raise the ire of communities worried about the impact of any supersized “AI factories” planned for their areas that could contain a million or more GPUs. “We need optimized chip-specific liquid cooling, to make sure every droplet of liquid goes to the right location,” van Erp says.

Corintis’ founders Sam Harrison [left] and Remco van Erp hold a cold plate and microfluidic core, respectively. Credit: Corintis

The simulation and optimization software developed by Corintis is used to design a network of microscopically small channels on cold plates. Just like arteries, veins, and capillaries in the circulatory system of the body, the ideal cold plate design for each type of chip is a complex network of precisely shaped channels. Corintis has scaled up its additive manufacturing capabilities to be able to mass-produce copper parts with channels as narrow as a human hair, about 70 micrometers. Its cold plate technology is compatible with today’s liquid cooling systems. The company believes this approach can improve cold plate results by at least 25 percent. By working with chip manufacturers directly to carve channels into the silicon itself, Corintis believes tenfold gains in cooling can eventually be realized.

## Advancing Liquid Cooling for AI Chips

Liquid cooling is far from new. The IBM 360 mainframe, for example, was cooled by water more than half a century ago.
Modern day liquid cooling has largely been a contest between immersion systems—in which racks and sometimes entire rows of equipment are submerged in a cooling fluid—and direct-to-chip systems—in which a cooling fluid is channeled to a cold plate placed against a chip. Immersion cooling is not yet ready for prime time. And while direct-to-chip cooling is being widely deployed to keep GPUs cool, it only cools around the surface of the chip. “Liquid cooling in today’s form is a one-size-fits-all solution, relying on simplistic designs that are not adapted to the chip, which prevents good heat transfer,” says van Erp. “The optimal design for each chip is a complex network of precisely shaped micro-scale channels that are adapted to the chip to guide coolant to the most critical regions.” Corintis is already working with chip manufacturers on improved designs. Chip manufacturers are using the company’s thermal emulation platform to program heat dissipation on silicon test chips with millimeter-scale resolution, and then sense the resulting temperature on the chip after the selected cooling method is installed. In other words, Corintis acts as the bridge between chip design and cooling system design, enabling chip designers to build future chips for AI applications with superior thermal performance. The next stage is to go from being a bridge between cooling channel and chip design to unification of those two processes. “Modern chips and cooling are currently two discrete elements with the interface between the two being one of the main bottlenecks for heat transfer,” says van Erp. To improve cooling performance by tenfold, Corintis is betting on a future where cooling is tightly coupled as an integral part of the chip itself—the microfluidic cooling channels will be etched directly inside the microprocessor package rather than on cold plates on the perimeter. Corintis has produced more than 10,000 copper cold plates, and is ramping its manufacturing capabilities to reach a million cold plates by the end of 2026. It has also developed a prototype line in Switzerland where it is developing cooling channels directly within chips rather than onto a cold plate. This is only planned for small quantities to demonstrate basic concepts that will then be turned over to chip makers and cold plate manufacturers. Corintis announced these expansion plans immediately following the publication of the Microsoft Teams tests. In addition, it is opening U.S. offices to serve U.S. customers and an Engineering office in Munich, Germany. In addition, the company also announced the completion of a US $24 million Series A funding round led by BlueYard Capital and other investors.
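Van Erp’s coolant arithmetic earlier in the piece is easy to reproduce. The sketch below uses the quoted 1.5 liters per minute per kilowatt figure; the fourfold reduction factor is an assumption purely for illustration, not a Corintis claim.

```python
# Coolant-flow arithmetic based on the figure quoted in the article.
LITERS_PER_MIN_PER_KW = 1.5   # current industry standard cited by van Erp

def coolant_flow_lpm(chip_power_kw: float, reduction_factor: float = 1.0) -> float:
    """Flow for one chip; reduction_factor > 1 models targeted channel design."""
    return LITERS_PER_MIN_PER_KW * chip_power_kw / reduction_factor

print(coolant_flow_lpm(10))    # ~15 L/min for a 10 kW chip, as in the text

# If chip-specific channels let every droplet go where the heat is and cut
# flow by an assumed 4x (illustrative only), the same chip would need:
print(coolant_flow_lpm(10, reduction_factor=4))   # ~3.75 L/min
```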
spectrum.ieee.org
November 17, 2025 at 10:06 PM
Your Laptop Isn’t Ready for LLMs. That’s About to Change
**Odds are the PC in** your office today isn’t ready to run AI large language models (LLMs). Today, most users interact with LLMs via an online, browser-based interface. The more technically inclined might use an application programming interface or command line interface. In either case, the queries are sent to a data center, where the model is hosted and run. It works well, until it doesn’t; a data-center outage can take a model offline for hours. Plus, some users might be unwilling to send personal data to an anonymous entity. Running a model locally on your computer could offer significant benefits: lower latency, better understanding of your personal needs, and the privacy that comes with keeping your data on your own machine. However, for the average laptop that’s over a year old, the number of useful AI models you can run locally on your PC is close to zero. This laptop might have a four- to eight-core processor (CPU), no dedicated graphics chip (GPU) or neural-processing unit (NPU), and 16 gigabytes of RAM, leaving it underpowered for LLMs. Even new, high-end PC laptops, which often include an NPU and a GPU, can struggle. The largest AI models have over a trillion parameters, which requires memory in the hundreds of gigabytes. Smaller versions of these models are available, even prolific, but they often lack the intelligence of larger models, which only dedicated AI data centers can handle. The situation is even worse when other AI features aimed at making the model more capable are considered. Small language models (SLMs) that run on local hardware either scale back these features or omit them entirely. Image and video generation are difficult to run locally on laptops, too, and until recently they were reserved for high-end tower desktop PCs. That’s a problem for AI adoption. To make running AI models locally possible, the hardware found inside laptops and the software that runs on it will need an upgrade. This is the beginning of a shift in laptop design that will give engineers the opportunity to abandon the last vestiges of the past and reinvent the PC from the ground up. ## NPUs enter the chat The most obvious way to boost a PC’s AI performance is to place a powerful NPU alongside the CPU. An NPU is a specialized chip designed for the matrix multiplication calculations that most AI models rely on. These matrix operations are highly parallelized, which is why GPUs (which were already better at highly parallelized tasks than CPUs) became the go-to option for AI data centers. However, because NPUs are designed specifically to handle these matrix operations—and not other tasks, like 3D graphics—they’re more power efficient than GPUs. That’s important for accelerating AI on portable consumer technology. NPUs also tend to provide better support for low-precision arithmetic than laptop GPUs. AI models often use low-precision arithmetic to reduce computational and memory needs on portable hardware, such as laptops. ### Laptops Are Being Rebuilt to Run LLMs 1. **Addition of NPUs.** Neural processing units (NPUs)—specialized accelerator chips that can run large language models (LLMs) and other AI models faster than CPUs and GPUs can—are being incorporated into laptops. 2. **Addition of more—and faster—memory.** The largest language models take up hundreds of gigabytes of memory. To host these models, and serve them quickly to the number-crunching processing units, laptops are increasing their memory capacity and speed. 3. 
**Consolidation of memory.** Most laptops today have a divided memory architecture, with a separate pool of memory to serve the GPUs. This made sense when the design first came out: GPUs needed faster memory access than could be supplied by the common bus. Now, to feed AI’s data appetite, laptop architects are rethinking this decision and pooling memory together with faster interconnects. 4. **Combination of chips on the same silicon.** To help shorten the path to pooled memory, all the processing units—CPUs, GPUs, and NPUs—are now being integrated into the same silicon chip. This helps them connect to one another and to memory, but it will make maintenance more challenging. 5. **Power management.** AI models can see heavy use when they power always-on features like Microsoft’s Windows Recall or the AI-powered Windows Search. Power-sipping NPUs help laptops run these models without excessive battery drain. “With the NPU, the entire structure is really designed around the data type of tensors [a multidimensional array of numbers],” said Steven Bathiche, technical fellow at Microsoft. “NPUs are much more specialized for that workload. And so we go from a CPU that can handle [three trillion] operations per second (TOPS), to an NPU” in Qualcomm’s Snapdragon X chip, which can power Microsoft’s Copilot+ features. This includes Windows Recall, which uses AI to create a searchable timeline of a user’s usage history by analyzing screenshots, and Windows Photos’ Generative erase, which can remove the background or specific objects from an image. While Qualcomm was arguably the first to provide an NPU for Windows laptops, it kickstarted an NPU TOPS arms race that also includes AMD and Intel, and the competition is already pushing NPU performance upward. In 2023, prior to Qualcomm’s Snapdragon X, AMD chips with NPUs were uncommon, and those that existed delivered about 10 TOPS. Today, AMD and Intel have NPUs that are competitive with Snapdragon, providing 40 to 50 TOPS. Dell’s upcoming Pro Max Plus AI PC will up the ante with a Qualcomm AI 100 NPU that promises up to 350 TOPS, improving performance by a staggering 35 times compared with that of the best available NPUs just a few years ago. Drawing that line up and to the right implies that NPUs capable of thousands of TOPS are just a couple of years away. How many TOPS do you need to run state-of-the-art models with hundreds of billions of parameters? No one knows exactly. It’s not possible to run these models on today’s consumer hardware, so real-world tests just can’t be done. But it stands to reason that we’re within throwing distance of those capabilities. It’s also worth noting that LLMs are not the only use case for NPUs. Vinesh Sukumar, Qualcomm’s head of AI and machine learning product management, says AI image generation and manipulation is an example of a task that’s difficult without an NPU or high-end GPU. ## Building balanced chips for better AI Faster NPUs will handle more tokens per second, which in turn will deliver a faster, more fluid experience when using AI models. Yet there’s more to running AI on local hardware than throwing a bigger, better NPU at the problem. Mike Clark, corporate fellow design engineer at AMD, says that companies that design chips to accelerate AI on the PC can’t put all their bets on the NPU. That’s in part because AI isn’t a replacement for, but rather an addition to, the tasks a PC is expected to handle.
“We must be good at low latency, at handling smaller data types, at branching code—traditional workloads. We can’t give that up, but we still want to be good at AI,” says Clark. He also noted that “the CPU is used to prepare data” for AI workloads, which means an inadequate CPU could become a bottleneck. NPUs must also compete or cooperate with GPUs. On the PC, that often means a high-end AMD or Nvidia GPU with large amounts of built-in memory. The Nvidia GeForce RTX 5090’s specifications quote an AI performance up to 3,352 TOPS, which leaves even the Qualcomm AI 100 in the dust. That comes with a big caveat, however: power. Though extremely capable, the RTX 5090 is designed to draw up to 575 watts on its own. Mobile versions for laptops are more miserly but still draw up to 175 W, which can quickly drain a laptop battery. Simon Ng, client AI product manager at Intel, says the company is “seeing that the NPU will just do things much more efficiently at lower power.” Rakesh Anigundi, AMD’s director of product management for Ryzen AI, agrees. He adds that low-power operation is particularly important because AI workloads tend to take longer to run than other demanding tasks, like encoding a video or rendering graphics. “You’ll want to be running this for a longer period of time, such as an AI personal assistant, which could be always active and listening for your command,” he says. These competing priorities mean chip architects and system designers will need to make tough calls about how to allocate silicon and power in AI PCs, especially those that often rely on battery power, such as laptops. “We have to be very deliberate in how we design our system-on-a-chip to ensure that a larger SoC can perform to our requirements in a thin and light form factor,” said Mahesh Subramony, senior fellow design engineer at AMD. ## When it comes to AI, memory matters Squeezing an NPU alongside a CPU and GPU will improve the average PC’s performance in AI tasks, but it’s not the only revolutionary change AI will force on PC architecture. There’s another that’s perhaps even more fundamental: memory. Most modern PCs have a divided memory architecture rooted in decisions made over 25 years ago. Limitations in bus bandwidth led GPUs (and other add-in cards that might require high-bandwidth memory) to move away from accessing a PC’s system memory and instead rely on the GPU’s own dedicated memory. As a result, powerful PCs typically have two pools of memory, system memory and graphics memory, which operate independently. That’s a problem for AI. Models require large amounts of memory, and the entire model must load into memory at once. The legacy PC architecture, which splits memory between the system and the GPU, is at odds with that requirement. “When I have a discrete GPU, I have a separate memory subsystem hanging off it,” explained Joe Macri, vice president and chief technology officer at AMD. “When I want to share data between our [CPU] and GPU, I’ve got to take the data out of my memory, slide it across the PCI Express bus, put it in the GPU memory, do my processing, then move it all back.” Macri said this increases power draw and leads to a sluggish user experience. The solution is a unified memory architecture that provides all system resources access to the same pool of memory over a fast, interconnected memory bus. Apple’s in-house silicon is perhaps the most well-known recent example of a chip with a unified memory architecture. However, unified memory is otherwise rare in modern PCs. 
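To put rough numbers on the memory problem, here is a back-of-the-envelope sketch in Python. The parameter counts, precisions, and bus bandwidth below are illustrative assumptions rather than measurements; the point is the arithmetic: weights alone occupy roughly the parameter count times the bytes per parameter, and a divided memory architecture pays an extra copy over the bus that a unified pool avoids.

```python
# Back-of-the-envelope sketch. All figures below are illustrative assumptions,
# not benchmarks: real models also need memory for activations and KV caches,
# and real buses rarely reach their theoretical peak.

def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Memory (decimal GB) needed just to hold a model's weights."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

def copy_seconds(size_gb: float, bus_gb_per_s: float) -> float:
    """Time to move a blob of that size across a bus, one way."""
    return size_gb / bus_gb_per_s

for params, bits in [(1000, 16), (70, 8), (7, 4)]:
    gb = weight_memory_gb(params, bits)
    print(f"{params}B parameters at {bits}-bit: ~{gb:,.1f} GB of weights")

# A divided memory architecture has to shuttle those bytes over PCIe; a unified
# pool lets the CPU, GPU, and NPU read the same memory with no copy at all.
PCIE_GEN4_X16_GB_S = 32.0  # assumed theoretical one-way bandwidth
print(f"Copying a 16 GB model into GPU memory: "
      f"~{copy_seconds(16, PCIE_GEN4_X16_GB_S):.1f} s each way")
```

Under these assumptions a roughly trillion-parameter model at 16-bit precision needs about 2,000 GB for weights alone, which is data-center territory, while a 7-billion-parameter model quantized to 4 bits fits in a few gigabytes, which is why aggressive quantization is central to the local-AI story.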
AMD is following suit in the laptop space. The company announced a new line of APUs targeted at high-end laptops, Ryzen AI Max, at CES (Consumer Electronics Show) 2025. Ryzen AI Max places the company’s Ryzen CPU cores, Radeon-branded GPU cores, and an NPU rated at 50 TOPS on a single piece of silicon with a unified memory architecture. Because of this, the CPU, GPU, and NPU can all access up to 128 GB of system memory, which is shared among all three. AMD believes this strategy is ideal for memory and performance management in consumer PCs. “By bringing it all under a single thermal head, the entire power envelope becomes something that we can manage,” said Subramony. The Ryzen AI Max is already available in several laptops, including the HP Zbook Ultra G1a and the Asus ROG Flow Z13. It also powers the Framework Desktop and several mini desktops from less well-known brands, such as the GMKtec EVO-X2 AI mini PC. Intel and Nvidia will also join this party, though in an unexpected way. In September, the former rivals announced an alliance to sell chips that pair Intel CPU cores with Nvidia GPU cores. While the details are still under wraps, the chip architecture will likely include unified memory and an Intel NPU. Chips like these stand to drastically change PC architecture if they catch on. They’ll offer access to much larger pools of memory than before and integrate the CPU, GPU, and NPU into one piece of silicon that can be closely monitored and controlled. These factors should make it easier to shuffle an AI workload to the hardware best suited to execute it at a given moment. Unfortunately, they’ll also make PC upgrades and repairs more difficult, as chips with a unified memory architecture typically bundle the CPU, GPU, NPU, and memory into a single, physically inseparable package on a PC mainboard. That’s in contrast with traditional PCs, where the CPU, GPU, and memory can be replaced individually. ## Microsoft’s bullish take on AI is rewriting Windows MacOS is well regarded for its attractive, intuitive user interface, and Apple Silicon chips have a unified memory architecture that can prove useful for AI. However, Apple’s GPUs aren’t as capable as the best ones used in PCs, and its AI tools for developers are less widely adopted. Chrissie Cremers, cofounder of the AI-focused marketing firm Aigency Amsterdam, told me earlier this year that although she prefers macOS, her agency doesn’t use Mac computers for AI work. “The GPU in my Mac desktop can hardly manage [our AI workflow], and it’s not an old computer,” she said. “I’d love for them to catch up here, because they used to be the creative tool.” That leaves an opening for competitors to become the go-to choice for AI on the PC—and Microsoft knows it. Microsoft launched Copilot+ PCs at the company’s 2024 Build developer conference. The launch had problems, most notably the botched release of its key feature, Windows Recall, which uses AI to help users search through anything they’ve seen or heard on their PC. Still, the launch was successful in pushing the PC industry toward NPUs, as AMD and Intel both introduced new laptop chips with upgraded NPUs in late 2024. At Build 2025, Microsoft also revealed Windows’ AI Foundry Local, a “runtime stack” that includes a catalog of popular open-source large language models.
While Microsoft’s own models are available, the catalog includes thousands of open-source models from Alibaba, DeepSeek, Meta, Mistral AI, Nvidia, OpenAI, Stability AI, xAI, and more. Once a model is selected and implemented into an app, Windows executes AI tasks on local hardware through the Windows ML runtime, which automatically directs AI tasks to the CPU, GPU, or NPU hardware best suited for the job. AI Foundry also provides APIs for local knowledge retrieval and low-rank adaptation (LoRA), advanced features that let developers customize the data an AI model can reference and how it responds. Microsoft also announced support for on-device semantic search and retrieval-augmented generation, features that help developers build AI tools that reference specific on-device information. “[AI Foundry] is about being smart. It’s about using all the processors at hand, being efficient, and prioritizing workloads across the CPU, the NPU, and so on. There’s a lot of opportunity and runway to improve,” said Bathiche. ### Toward AGI on PCs The rapid evolution of AI-capable PC hardware represents more than just an incremental upgrade. It signals a coming shift in the PC industry that’s likely to wipe away the last vestiges of the PC architectures designed in the ’80s, ’90s, and early 2000s. The combination of increasingly powerful NPUs, unified memory architectures, and sophisticated software-optimization techniques is closing the performance gap between local and cloud-based AI at a pace that has surprised even industry insiders, such as Bathiche. It will also nudge chip designers toward ever-more-integrated chips that have a unified memory subsystem and to bring the CPU, GPU, and NPU onto a single piece of silicon—even in high-end laptops and desktops. AMD’s Subramony said the goal is to have users “carrying a mini workstation in your hand, whether it’s for AI workloads, or for high compute. You won’t have to go to the cloud.” A change that massive won’t happen overnight. Still, it’s clear that many in the PC industry are committed to reinventing the computers we use every day in a way that optimizes for AI. Qualcomm’s Vinesh Sukumar even believes affordable consumer laptops, much like data centers, should aim for AGI. “I want a complete artificial general intelligence running on Qualcomm devices,” he said. “That’s what we’re trying to push for.”
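For a concrete sense of what “running a model locally” means in practice, here is a minimal, vendor-neutral sketch using the open-source Hugging Face Transformers library. It is not Microsoft’s Windows ML or Foundry Local API, which this article describes only at a high level, and the model identifier is a placeholder for whatever small open-weight model a given laptop can actually hold.

```python
# Illustrative sketch only: local inference via the open-source Hugging Face
# Transformers library, standing in for whichever runtime (Windows ML, Foundry
# Local, llama.cpp, and so on) a given laptop stack actually uses.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path-or-name-of-a-small-open-model"  # placeholder, e.g. a ~3B-parameter SLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # weights load into local RAM

prompt = "In one sentence, why do NPUs help laptops run language models?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Everything in this sketch stays on the machine; the only network traffic is the one-time model download, which is the latency and privacy argument for local AI in a nutshell.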
November 17, 2025 at 10:06 PM
Students Compete—and Cooperate—in FIRST Global Robotics Challenge
Aspiring engineers from 191 countries gathered in Panama City in October to compete in the FIRST Global Robotics Challenge. The annual contest aims to foster problem-solving and cooperation and to inspire the next generation of engineers through three challenges built around a different theme each year. Teams of students ages 14 to 18 from around the world compete in the three-day event, remotely operating their robots to complete the challenges. This year’s topic was “Eco-equilibrium,” emphasizing the importance of preserving ecosystems and protecting vulnerable species. ## Turning Robotics Into a Sport Each team competed in a series of ranking matches at the event. Each match lasted two minutes and 30 seconds and involved several simultaneous goals. First, students guided their robots in gathering “biodiversity units” (multicolored balls) and delivering them to their human teammates. Next, the robots removed “barriers” (larger, gray balls) from containers and disposed of them in a set area. Then team members threw the biodiversity units into the now-cleared containers to score points. At the end of the match, each robot was tasked with climbing a 1.5-meter rope. The team with the most points won the match. To promote collaboration, each match pitted two groups against each other, each consisting of three individual teams and their robots. Each team controlled its own robot but had to work together with the other robots in the group to complete the tasks. If all six robots managed to climb the rope at the end of the match, each team’s score was multiplied by 1.5. The top 24 teams were split into six “alliances” of four individual teams each to compete in the playoffs. The highest-scoring alliance was crowned the winner. This year’s winning teams were Cameroon, Mexico, Panama, and Venezuela. Each student received a gold medal. At first glance, it may have been hard to tell it was a competition. When all six robots successfully climbed the rope at the end of a match, students across teams were hugging each other, clapping, and cheering. “It’s not about winning, it’s not about losing, it’s about learning from others,” says Clyde Snyders, a member of the South Africa team. His sentiment was echoed throughout the event. ## Making It Into the Competition Before the main event, countries all over the world run qualifying events where thousands of students show off their robotics skills for a chance to make it to the final competition. Each country chooses its team differently. Some pick the top-scoring team to compete, while others pick students from different teams to create a new one. Even after qualifying, for some students, physically getting to the competition isn’t straightforward. This year Team Jamaica faced challenges after Hurricane Melissa struck the country on 28 October, one day before the competition began. It was the strongest storm ever to hit Jamaica, killing 32 people and causing billions of dollars in damage to infrastructure. Because of the damage, the Jamaican team faced repeatedly canceled flights and other travel delays. They almost didn’t make it, but FIRST Global organizers covered the costs of their travel. The students arrived on the second day, just in time to participate in enough matches to avoid being disqualified. Team Jamaica arrived late due to Hurricane Melissa, but they remained positive. Kohava Mendelsohn “We are so happy to be here,” says Joelle Wright, the team captain. 
“To be able to engage in new activities, to compete, and to be able to showcase our hard work.” Team Jamaica won a bronze medal. ## Working Together to Fix and Improve Robots Throughout the competition, it was a regular occurrence to see students from different teams huddled together, debugging problems, sharing tips, and learning together. Students were constantly fixing their robots and adding new features at the event’s robot hospital. There, teams could request spare parts, get help from volunteers, and access the tools they needed. Volunteering in the robot hospital is demanding but rewarding, says Janet Kapito, an electrical engineer and the operations manager at Robotics Foundation Malawi in Blantyre. She participated in the FIRST Global Challenge when she was a student. “[The volunteers] get to see different perspectives and understand how people think differently,” she says. It’s rewarding to watch students solve problems on their own, she adds. The hospital was home to many high-stress situations, especially on the first day of the competition. The Ecuadorian team’s robot was delayed in transit. So, using the robot hospital’s parts, the students built a new robot to compete with. Tanzanian team members were hard at work repairing their robot, which was having issues with the mechanism that allowed it to climb up the rope. Collaboration played a key role in the hospital. When the South African team’s robot was having mechanical problems, the students weren’t fixing it alone—several teams, including Venezuela, Slovenia, and India, came to help. “It was truly inspirational, and such a great effort in bringing teams from over 190 countries to come and collaborate,” says Joseph Wei, director of IEEE Region 6, who attended the event. ## The Importance of Mentoring Future Engineers Behind every team were mentors and coaches who provided students with guidance and experience. Many of them were past participants who are invested in teaching the next generation of engineers. But the robots are designed and built by the students, says Rob Haake, a mentor for Team United States. He tried to stay as hands-off as possible in the engineering of the robot, he says, “so if you asked me to turn on the robot, I don’t even know how to do it.” Haake is the COO of window and door manufacturing company Weiland, Inc., in Norfolk, Neb. His passion is to teach kids the skills they need to build things. It’s important to teach students how to think critically and solve problems while also developing technical skills, he says, because those students are the future tech leaders. One major issue he sees is the lack of team mentors. If you’re an engineer, he says, “the best way to help [FIRST Global] grow is to call your local schools to ask if they have a robotics team, and if not, how you can help create one. The answer may be a monetary donation or, more importantly, your time.” The students you mentor may one day represent their country at the FIRST Global Challenge.
November 15, 2025 at 2:13 PM
This Soft Robot Is 100% Edible, Including the Battery
While there are many useful questions to ask when encountering a new robot, “can I eat it” is generally not one of them. I say ‘generally,’ because edible robots are actually a thing—and not just edible in the sense that you can technically swallow them and suffer both the benefits and consequences, but __ingestible__, where you can take a big bite out of the robot, chew it up, and swallow it. Yum. But so far these ingestible robots have included a very please-don’t-ingest-this asterisk: the motor and battery, which are definitely toxic and probably don’t taste all that good. The problem has been that soft, ingestible actuators run on gas pressure, requiring pumps and valves to function, neither of which is easy to make without plastic and metal. But in a new paper, researchers from Dario Floreano’s Laboratory of Intelligent Systems at EPFL in Switzerland have demonstrated ingestible versions of both batteries and actuators, resulting in what is, as far as I know, the first entirely ingestible robot capable of controlled actuation. * * * EPFL Let’s start with the battery on this lil’ guy. In a broad sense, a battery is just a system for storing and releasing energy. In the case of this particular robot, the battery is made of gelatin and wax. It stores chemical energy in chambers containing liquid citric acid and baking soda, both of which you can safely eat. The citric acid is kept separate from the baking soda by a membrane, and enough pressure on the chamber containing the acid will puncture that membrane, allowing the acid to slowly drip onto the baking soda. This activates the battery and begins to generate CO2 gas, along with sodium citrate (common in all kinds of foods, from cheese to sour candy) as a byproduct. EPFL The CO2 gas travels through gelatin tubing into the actuator, which is of a fairly common soft robotic design that uses interconnected gas chambers on top of a slightly stiffer base that bends when pressurized. Pressurizing the actuator gets you one single actuation, but to make the actuator wiggle (wiggling being an absolutely necessary skill for any robot), the gas has to be cyclically released. The key to doing this is the other major innovation here: an ingestible valve. EPFL The valve operates based on the principle of snap-buckling, which means that it’s happiest in one shape (closed), but if you put it under enough pressure, it rapidly snaps open and then closed again once the pressure is released. The current version of the robot operates at about four bending cycles per minute over a period of a couple of minutes before the battery goes dead. And so there you go: a battery, a valve, and an actuator, all ingestible, make for a little wiggly robot, also ingestible. Great! But __why__? “A potential use case for our system is to provide nutrition or medication for elusive animals, such as wild boars,” says lead author Bokeon Kwak. “Wild boars are attracted to live moving prey, and in our case, it’s the edible actuator that mimics it.” The concept is that you could infuse something like a swine flu vaccine into the robot. Because it’s cheap to manufacture, safe to deploy, completely biodegradable, and wiggly, it could potentially serve as an effective strategy for targeted mass delivery to the kind of animals that nobody wants to get close to. And it’s obviously not just wild boars—by tuning the size and motion characteristics of the robot, what triggers it, and its smell and taste, you could target pretty much any animal that finds wiggly things appealing. 
And that includes humans! Kwak says that if you were to eat this robot, the actuator and valve would taste a little bit sweet, since they have glycerol in them, with a texture like gummy candy. The pneumatic battery would be crunchy on the outside and sour on the inside (like a lemon) thanks to the citric acid. While this work doesn’t focus specifically on taste, the researchers have made other versions of the actuator that were flavored with grenadine. They served these actuators out to humans earlier this year, and are working on an ‘analysis of consumer experience’ which I can only assume is a requirement before announcing a partnership with Haribo. Eatability, though, is not the primary focus of the robot, says PI Dario Floreano. “If you look at it from the broader perspective of environmental and sustainable robotics, the pneumatic battery and valve system is a key enabling technology, because it’s compatible with all sorts of biodegradable pneumatic robots.” And even if you’re not particularly concerned with all the environmental stuff, which you really should be, in the context of large swarms of robots in the wild it’s critical to focus on simplicity and affordability just to be able to usefully scale. This is all part of the EU-funded RoboFood project, and Kwak is currently working on other edible robots. For example, the elastic snap-buckling behavior in this robot’s valve is sort of battery-like in that it’s storing and releasing elastic energy, and with some tweaking, Kwak is hoping that edible elastic power sources might be the key for tasty little jumping robots that jump right off the dessert plate and into your mouth. Edible Pneumatic Battery for Sustained and Repeated Robot Actuation, by Bokeon Kwak, Shuhang Zhang, Alexander Keller, Qiukai Qi, Jonathan Rossiter, and Dario Floreano from EPFL, is published in __Advanced Science__ .
November 15, 2025 at 5:56 AM
Video Friday: DARPA Challenge Focuses on Heavy Lift Drones
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at _IEEE Spectrum_ robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ##### ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! > _Current multirotor drones provide simplicity, affordability, and ease of operation; however, their primary limitation is their low payload-to-weight ratio, which typically falls at 1:1 or less. The DARPA Lift Challenge aims to shatter the heavy lift bottleneck, seeking novel drone designs that can carry payloads more than four times their weight, which would revolutionize the way we use drones across all sectors._ [DARPA ] > _Huge milestone achieved! World’s first mass delivery of humanoid robots has completed! Hundreds of UBTECH Walker S2 have been delivered to our partners._ I really hope that’s not how they’re actually shipping their robots. [UBTECH ] There is absolutely no reason to give robots hands if you can just teach them to lasso stuff instead. [ArcLab ] > _Saddle Creek deployed Carter in its order fulfillment operation for a beauty client. It helps to automate and optimize tote delivery operations between multiple processing and labeling lines and more than 20 designated drop-off points. In this capacity, Carter functions as a flexible, non-integrated “virtual conveyor” that streamlines material flow without requiring fixed infrastructure._ [Robust.ai ] > _This is our latest work on an aerial–ground robot team, the first time a language–vision hierarchy achieves long-horizon navigation and manipulation on the real UAV + quadruped using only 2D cameras. The article is published open-access in Advanced Intelligent Systems._ [DRAGON Lab ] Thanks, Moju! I am pretty sure that you should not use a quadrupedal robot to transport your child. But only pretty sure, not totally certain. [DEEP Robotics ] > _Building Behavioral Foundation Models (BFMs) for humanoid robots has the potential to unify diverse control tasks under a single, promptable generalist policy. However, existing approaches are either exclusively deployed on simulated humanoid characters, or specialized to specific tasks such as tracking. We propose BFM-Zero, a framework that learns an effective shared latent representation that embeds motions, goals, and rewards into a common space, enabling a single policy to be prompted for multiple downstream tasks without retraining._ [BFM-Zero ] Welcome to the very, very near future of manual labor. [AgileX ] > _MOMO (Mobile Object Manipulation Operator) has been one of KIMLAB’s key robots since its development about two years ago and has featured as a main actor in several of our videos. The design and functionalities of MOMO were recently published in IEEE Robotics & Automation Magazine._ [ Paper ] via [ KIMLAB ] > _We are excited about our new addition to our robot fleet! As a shared resource for our faculty members, this robot will facilitate multiple research activities within our institute that target significant future funding. Our initial focus for this robot will be on an agricultural application but we have big plans for the robot in human-robot interaction projects._ [Ingenuity Labs ] The nice thing about robots that pick grapes in vineyards is that they don’t just eat the grapes, like I do. [Extend Robotics ] How mobile of a mobile manipulator do you need? [Clearpath Robotics ] > _Robotics professor, Dr. 
Christian Hubicki, talks about the NEO humanoid announcement on October 29th, 2025. While explaining the technical elements and product readiness, he refuses to show any emotion whatsoever._ [Optimal Robotics Lab ]
November 15, 2025 at 5:56 AM
Apple’s Failed Foray Into Mac Clones
There’s a class of consumer that wants something they know they cannot have. For some of those people, a Macintosh computer not made by Apple has long been a desired goal. For most of the Mac’s history, you could only really get one from Apple, if you wanted to go completely by the book. Sure, there were less-legit ways to get Apple software on off-brand hardware, and plenty of people were willing to try them. But there was a short period, roughly 36 months, when it was possible to get a licensed Mac that had the blessing of the team in Cupertino. They called it the Mac clone era. It was Apple’s direct response to a PC market that had come to embrace open architectures—and, over time, made Apple’s own offerings seem small. During that period, from early 1995 to late 1997, you could get legally licensed Macs from a series of startups now forgotten to history, as well as one of Apple’s own major suppliers at the time, Motorola. And it was great for bargain hunters who, for perhaps the first time in Apple’s history, had a legit way to avoid the Apple tax. But that period ended fairly quickly, in large part thanks to the man whose fundamental aversion to clone-makers likely caused the no-clones policy in the first place: Steve Jobs. “It was the dumbest thing in the world to let companies making crappier hardware use our operating system and cut into our sales,” Jobs told Walter Isaacson in his 2011 biography. Apple has generally avoided giving up its golden goose because the company was built around vertical integration. If you went into a CompUSA and bought a Mac, you were buying the full package, hardware and software, made by Apple. This had benefits and downsides. Because Apple charged a premium for its devices (unlike other vertical integrators, such as Commodore and Atari), it tended to relegate the company to a smaller part of the market. On the other hand, that product was highly polished. That meant Apple needed to be good at two wildly disparate skill sets—and protect others from stealing Apple’s software prowess for their own cheaper hardware. While historians can point to the rise of unofficial Apple II clones in the ‘80s, and modern Apple fans can still technically build Hackintoshes on Intel hardware, Apple’s own Mac clone program came and went in just a few short years. It was a painful lesson. This Outbound Notebook wasn’t sold with Apple features, but allowed users to insert a Mac ROM, a component that helped Apple limit cloning. However, the ROM had to come from a genuine, working Apple computer. Chaosdruid/Wikimedia Commons ### Why Apple was afraid of Mac clones For years, companies attempted to wrangle the Mac out of Apple’s hands, corporate blessing or no. Apple, highly focused on vertical integration, used its ROM chips as a way to limit the flow of MacOS to potential clone-makers. This mostly worked, as the Mac’s operating system was far more complex and harder to reverse-engineer than the firmware used by the IBM PC. But plenty still tried. For example, a Brazilian company named Unitron sold a direct clone of the Macintosh 512K, which fell off the market only after a Brazilian trade body intervened. Later, a company named Akkord Technology attempted to sell a reverse-engineered device called the Jonathan, but ended up attracting a police raid instead. 
Somewhat more concerning for Apple’s exclusivity: Early Macs shared much of their hardware architecture with other popular machines, particularly the Commodore Amiga and Atari ST, each of which received peripherals that introduced Mac software support and made it easier to work across platforms. But despite claims that this ROM-based approach was technically legal, it’s not like any of this was explicitly allowed by Apple. At one point, __Infoworld__ responded to a letter to the editor about this phenomenon with a curt note: “Apple continually reaffirms its intention to protect its ROM and to prevent the cloning of the Mac.” So what __was__ Apple OK with? Full-on conversions, which took the hardware of an existing Mac, and rejiggered its many parts into an entirely new product. There are many examples of this throughout Apple’s history—such as the ModBook, a pre-iPad Mac tablet—but the idea started with Chuck Colby. Colby, an early PC clone-maker who was friends with Apple team members like Steve Wozniak, was already offering a portable Mac conversion called the MacColby at one of the Mac’s introductory events in 1984. (Apparently Apple CEO John Sculley bought two—but never paid for them.) One of Colby’s later conversions, the semi-portable Walkmac, had earned a significant niche audience. A 2013 __CNET__ piece notes that the rock band Grateful Dead and news anchor Peter Jennings were both customers, and that Apple would actually send Colby referrals. So, why did Colby get the red-carpet treatment while other clone-makers were facing lawsuits and police raids? You still needed a Mac to do the aftermarket surgery, so Apple still got its cut. One has to wonder: Would Apple have been better off just giving Chuck Colby, or any other interested party, a license to make their own clones? After all, it’s not like Colby’s ultra-niche portables were going to compete with Apple’s experience. Right? During the 1980s, this argument was basically a nonstarter—the company even went so far as to change its dealer policy to limit the resale of its system ROMs for non-repair purposes. But by the 1990s, things were beginning to thaw. You can thank a firm named NuTek for the nudge. The company, like Apple, was based in Cupertino, California, and it spent years talking up its reverse-engineering plans. “Nutek will do for Mac users what the first IBM-compatible developers did in the early 1980s: open up the market to increased innovation and competition by enabling major independent third-party manufacture,” explained Benjamin Chou, the company’s CEO, in a 1991 __ComputerWorld__ piece. And by 1993, it had built a “good enough” analogue of the Mac that could run most, but not all, Mac programs. “We’ve tested the top 15 software applications and 13 of those worked,” Chou told __InfoWorld__, an impressive boast until you hear the non-working apps are Microsoft Works and Excel. It failed to make a splash, but NuTek’s efforts nonetheless exposed a thaw in Apple’s thinking. A 1991 __MacWorld__ piece on NuTek’s reverse engineering attempt quoted Apple Chief Operating Officer Michael Spindler as saying, “It is not a question of whether Apple will license its operating system, but how it will do this.” Meanwhile, Windows was finally making inroads in the market, and Apple was ready to bend. ### The moment Apple changed its mind about clones There was a time when it looked like MacOS was about to become a Novell product. Really. 
In 1992, Apple held very serious talks with the networking software provider about selling the operating system, and it almost happened. Then Michael Spindler became Apple’s CEO in 1993 and killed the Novell experiment, but not the idea of licensing MacOS. It just needed the right partner. It found one with Stephen Kahng’s Power Computing. Kahng, a veteran of the clone wars, first made waves in the PC market with the clone-maker Leading Edge, and he wanted to repeat that feat with the Mac. And his new firm, Power Computing, was offering an inroad for Apple to potentially score similar success. And so, in the waning days of 1994, just before the annual MacWorld conference, the news hit the wires: Apple was getting an authorized clone-maker. It turns out that the key was just to wait for the right CEO to take over, then ask nicely. Though the idea may have looked rosy at first, some saw dark clouds over the whole thing. Famed tech columnist John C. Dvorak suggested that Kahng was more dangerous than he seemed. “Apple is not going to know what hit them,” he told __The New York Times__. And there were other signs that Apple was starting to lose its identity. A __PC Magazine__ analysis from early 1995 perhaps put the biggest frowny-face on the story: Apple’s decision to create a clone market may or may not be successful, but it didn’t really have a choice. At the recent MacWorld conference, one of the most popular technical seminars was given by Microsoft. It covered how Mac programmers can learn to write Windows applications. One can see why Apple might have been attracted to this model, in retrospect. The company was a bit lost in the market at the time, and needed a strategy to expand its shrinking base of users. But the clone market did not expand its base. Instead, it invited a price war. A PowerCenter Pro 210, a Macintosh clone manufactured by Power Computing Corporation. Angelgreat/Wikimedia Commons ### Why licensed Mac clones didn’t work The best time for Apple to introduce a clone program was probably a decade earlier, in 1985 or 1986. At the time, people like Chuck Colby were inventing new kinds of Macs that didn’t directly compete with what Apple was making. Furthermore, the concept of a Mac was new, just as desired for its form factor as for its software. In hindsight, it’s clear that 1995 __wasn’t__ a good time to do so. The decision held a mirror up to Apple’s own offerings, which attempted to hit every possible market segment—47 different device variants that year alone, per EveryMac. This didn’t reflect well on Apple—and companies like Power Computing exploited that to offer cheaper hardware. The company’s Power 100, for example, scored basically identical performance to the Macintosh 8100/100, while cutting more than US $1,000 off the Apple product’s $4,400 price tag. Meanwhile, other machines, such as the DayStar Genesis MP, outpaced Apple’s own ability to hit the high end. Both of these machines, in their own ways, hit at a key problem with Apple’s mid-’90s industrial design. Before the iMac revolutionized Apple computers upon its 1998 release, Macs simply didn’t have enough of a “wow factor” driving the industrial design. It made the Mac about the software, not the hardware. Within a year or two, it was clear that Apple had begun to undermine its own bottom line. When Chuck Colby put a Mac motherboard in a new chassis, Apple kept its high margins. But Power Computing’s beige boxes ate into Apple’s market share, and the MacOS maker also got a far smaller cut. 
There likely was a magic point at which Power Computing’s scale would have made up for the loss in hardware revenue. But in the era of Windows 95, Apple needed a partner that would go toe-to-toe with Packard Bell. Instead, these cut-rate Macs only attracted the already converted, undercutting Apple along the way. “I would guess that somewhere around 99 percent of their sales went to the existing customer base,” then-CFO Fred Anderson told __Wired__ in 1997. The company only figured this part out after Steve Jobs returned to the fold. ### Apple’s retreat from cloning The course correction got messy: Jobs, in the midst of trying to fix this situation in his overly passionate way, might have harmed the evolution of the PowerPC chip, for example. A 1998 piece from the __Wall Street Journal__ notes that Jobs’ tough negotiations over clones damaged Apple’s relationship with Motorola, its primary CPU supplier, to the point that the company pledged it would no longer go the extra mile for Apple. “They will be just another customer,” a Motorola employee told the paper. Power Computing—which had an apparent $500 million in revenue in 1996 alone—got a somewhat softer landing, though not without its share of drama. Apple had pushed the company to agree to a new licensing deal even before Jobs took over as CEO, and once he did, it was clear the companies would not see eye to eye. The company’s then-president, Joel Kocher, attempted to take the battle to MacWorld, where he forced a public confrontation over the issue. The board disagreed with Kocher’s actions, Kocher quit, and ultimately the company sold most of its assets to Apple for $100 million, effectively killing the market entirely. The only clone-maker that Apple seemed willing to play ball with was the company UMAX. The reason? Its SuperMac line had figured out how to hit the low-end market, an area Apple had famously struggled to reach. Apple wanted UMAX to focus on the sub-$1,000 market, especially in parts of the world where Apple lacked a foothold. But UMAX didn’t want the low end if it couldn’t keep a foothold in the more lucrative high end, and it chose to dip out on its own. The situation highlighted the ultimate problems with cloning—a loss of control, and a lack of alignment between licensor and licensee. Apple restricted the licenses, leaving these System 7 clones, for the most part, barred from (legally) upgrading to Mac OS 8. It did the trick—and starved the clone-makers out. ### The one time Steve Jobs flirted with a Mac clone That would be the end of the Apple clone story, except for one dangling thread: Steve Jobs once attempted to make an exception to his aversion to clones. In the early 2000s, Jobs pitched Sony on the idea of putting Mac OS X on its VAIO desktops and laptops, essentially because he felt it was the only product line that matched what Apple was doing from a visual standpoint. Jobs looked up to Sony and its cofounder Morita Akio, even offering a eulogy for Akio after his passing. (Nippon, upon Jobs’ passing, called the Apple founder’s appreciation for the country and its companies “a reciprocal love affair.”) But Sony had already done the work with Windows, so it wasn’t to be. On Sony’s part, it sounds like the kind of prudent decision Jobs made when he killed the clones a few years earlier.
November 15, 2025 at 5:56 AM
Get to Know the IEEE Board of Directors
The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide while fulfilling the IEEE mission of advancing technology for the benefit of humanity. This article features IEEE Board of Directors members Antonio Luque, Ravinder Dahiya, and Joseph Wei. ## IEEE Senior Member Antonio Luque **Director and vice president, Member and Geographic Activities** Antonio Luque Luque is a professor of electronics engineering at the Universidad de Sevilla, Spain, where he mentors students on digital electronics, devices, and cyber-physical systems. His work has focused on electronics, sensors, and microsystems for biomedical applications. He also has worked on the creation of disposable smart microsystems for safe production of radiopharmaceuticals applied to medical imaging. More recently, Luque has been working on cybersecurity and connectivity applied to the Internet of Things and real-time systems. He holds master’s and doctorate degrees in electrical engineering from the Universidad de Sevilla. Luque has been an active IEEE volunteer since 2002, when he first became involved in the IEEE Industrial Electronics Society’s technical conferences, and developed software to streamline many of the society’s operations. He is also a member of the IEEE Electron Devices and IEEE Education societies. He was also a coordinator for the IEEE Young Professionals group for the IEEE Spain Section. He later served as section chair. Luque was elected as the Region 8 director for 2020–2021 and served as director and vice president of IEEE Member and Geographic Activities in 2024. He also has served on the IEEE Governance Committee and IEEE European Public Policy Committee. He served as an associate editor of the __IEEE Journal of Microelectromechanical Systems__ from 2013 to 2019 and has been an associate editor of the __IEEE Transactions on Industrial Electronics__ since 2014. During his career, he has authored 20 journal articles, 40 conference papers, three book chapters, and a textbook. Luque received the 2007 Young Researcher Award from the Academia Europaea, which recognizes promising young scholars at the postdoctoral level. ## IEEE Fellow Ravinder Dahiya **Director, Division X** Ravinder Dahiya IEEE Sensors Council Dahiya is a professor of electrical and computer engineering at Northeastern University, in Boston. He leads the university’s Bendable Electronics and Sustainable Technologies group. His research interests include flexible and printed electronics, robotic tactile sensing, electronic skin technology, haptics, wearables, and intelligent interactive systems. Dahiya developed the first energy-generating tactile skin, which, in addition to providing touch feedback, generates energy that can operate actuators used by robots. His robotic tactile sensing research was recognized by IEEE through his elevation to the grade of Fellow. During the COVID-19 pandemic, Dahiya and his research team developed a low-cost DIY ventilator and a smart bandage that accelerated healing and helped detect the signs of coronavirus through respiratory feedback. He holds a bachelor’s degree in electrical engineering from Kurukshetra University in Kurukshetra, India, a master’s degree in electrical engineering from the Indian Institute of Technology in Delhi, and a doctorate in humanoid technologies from Istituto Italiano di Tecnologia and Università di Genova, Italy. 
He served as 2022–2023 president of the IEEE Sensors Council, where he launched several initiatives, including journals (e.g., the IEEE Journal on Flexible Electronics and IEEE Journal of Selected Areas in Sensors), conferences (e.g., the IEEE International Conference on Flexible Printable Sensors and Systems and IEEE Biosensors), and the Sensors in Spotlight networking event. During his time as president, he led the 25th anniversary events of the IEEE Sensors Council. He was a member of the editorial board of the __IEEE Sensors Journal__ from 2012 to 2020 and __IEEE Transactions on Robotics__ from 2011 to 2017, and was the founding editor-in-chief of the __IEEE Journal on Flexible Electronics__ from 2022 to 2023. He has authored or coauthored more than 550 research publications, as well as eight books, and he has been granted several patents. He has presented more than 250 keynote addresses and lectures worldwide, including a 2016 TEDx talk on “Animating the Inanimate World.” Dahiya received an Engineering and Physical Sciences Research Council fellowship and a Marie Curie fellowship. He was recognized with the 2016 IEEE Sensors Council Technical Achievement Award. He is also a Fellow of the Royal Society of Edinburgh. ## IEEE Senior Member Joseph Wei **Director, Region 6: Western U.S.** Joseph Wei Two Dudes Photo/FMS Conference A veteran of Silicon Valley, Wei combines his more than 40 years of experience in the entrepreneurial and information technology space with his passion for investing in and mentoring startups. He is a frequent speaker at global startup conferences on entrepreneurship and technology. Wei’s commitment to driving innovation and technological advancement through mentorship has yielded significant results. One of his portfolio healthcare startups recently debuted on the public market, valued at over US $3 billion. He played a key role in advancing global connectivity through his involvement with the IEEE Standards Association and its development of the IEEE 802.11 standard. Wi-Fi has become the foundation of modern wireless communication, transforming industries, enabling the digital economy, and bridging communities worldwide. Wei’s career-long efforts to accelerate the widespread adoption of open-source software have helped empower businesses of all sizes to innovate, reduce their technology costs, and foster global collaboration in software development. He holds a bachelor’s degree in electrical engineering from Tufts University in Medford, Massachusetts. He has served as chair of the IEEE Santa Clara Valley (California) Section board of governors, chair of the IEEE Consumer Technology Society’s Santa Clara Valley chapter, and chair of the IEEE Engineering in Medicine and Biology Society chapter in California, which in 2023 became the society’s largest chapter. He received a special Section Award from the Santa Clara Valley Section in 2020 for his outstanding volunteerism and service as a positive role model, as well as a Director’s Special Award in 2015 for his outstanding performance as the Santa Clara Valley Section chair, for managing and organizing this largest section in the world to make it more effective, and for supporting major IEEE Region 6 initiatives. Wei credits the exceptional training and extensive network of experts he’s amassed through his IEEE volunteer work for enabling him to provide valuable insights, industry connections, and expertise that help him guide startups and innovators.
November 13, 2025 at 9:14 PM
Two Visions for the Future of AR Smart Glasses
**Are you finally ready** to hang a computer screen on your face? Fifteen years ago, that would have seemed like a silly question. Then came the much-hyped and much-derided Google Glass in 2012, and frankly, it still seemed a silly question. Now, though, it’s a choice consumers are beginning to make. Tiny displays, shrinking processors, advanced battery designs, and wireless communications are coming together in a new generation of smart glasses that display information that’s actually useful right in front of you. But the big question remains: Just why would you want to do that? Some tech companies are betting that today’s smart glasses will be the perfect interface for delivering AI-supported information and other notifications. The other possibility is that smart glasses will replace bulky computer screens, acting instead as a private and portable monitor. But the companies pursuing these two approaches don’t yet know which choice consumers will make or what applications they really want. Smart-glasses skeptics will point to the fate of Google Glass, which was introduced in 2012 and quickly became a prime example of a pricey technology in search of practical applications. It had little to offer consumers, aside from being an aspirational product that was ostentatiously visible to others. (Some rude users were even derided as “glass-holes.”) While Glass was a success in specialized applications such as surgery and manufacturing until 2023—at least for those organizations that could afford to invest around a thousand dollars per pair—it lacked any compelling application for the average consumer. Smart-glasses technology may have improved since then, but the devices are still chasing a solid use case. From the tech behemoths to little brands you’ve never heard of, the hardware once again is out over its skis, searching for the right application. During a Meta earnings call in January, Mark Zuckerberg declared that 2025 “will be a defining year that determines if we’re on a path toward many hundreds of millions and eventually billions” of AI glasses. Part of that determination comes down to a choice of priorities: Should a head-worn display replicate the computer screens that we currently use, or should it work more like a smartwatch, which displays only limited amounts of information at a time? ## Virtual Reality vs. Augmented Reality Head-worn displays fall into two broad categories: those intended for virtual reality (VR) and those suited for augmented reality (AR). VR’s more-immersive approach found some early success in the consumer market, such as the Meta Quest 2 (originally released as the Oculus Quest 2 in 2020), which reportedly sold more than 20 million units before it was discontinued. According to the market research firm Counterpoint, however, the global market for VR devices fell by 12 percent year over year in 2024—the third year of decline in a row—because of hardware limitations and a lack of compelling use cases. As a mass consumer product, VR devices are probably past their moment. In contrast, AR devices allow the wearer to stay engaged with their surroundings as additional information is overlaid in the field of view. In earlier generations of smart glasses, this information added context to the scene, such as travel directions or performance data for athletes. Now, with advances in generative AI, AR can answer questions and translate speech and text in real time. Many analysts agree that AI-enhanced smart glasses are a market on the verge of massive growth. 
Louis Rosenberg, CEO and chief scientist with Unanimous AI, has been involved in AR technology from its start, more than 30 years ago. “AI-powered smart glasses are the first mainstream XR [extended reality] devices that are profoundly useful and will achieve rapid adoption,” Rosenberg told __IEEE Spectrum__. “This, in turn, will accelerate the adoption of immersive versions to follow. In fact, I believe that within five years, immersive AI-powered glasses will replace the smartphone as the primary mobile device in our digital lives.” RELATED: How a Parachute Accident Helped Jump-Start Augmented Reality Major tech companies, including Google and Apple, have announced their intentions to join this market, but have yet to ship a product. One exception is Meta, which released the Meta Ray-Ban Display in September, priced at US $799. (Ray-Ban Meta glasses without a display have been available since 2023.) A number of smaller companies, though, have forged the path for true AR smart glasses. Two of the most promising models—the One Pro from Beijing-based Xreal and the AI glasses from Halliday, based in Singapore—represent the two different design concepts evolving in today’s smart-glasses market. ## Halliday’s “Hidden Superpower” Halliday’s smart glasses are a lightweight, inconspicuous device that looks like everyday eyewear. The glasses have a single small microLED projector placed above the right lens. This imager beams a monochrome green image directly to your eye, with a level of light dim enough to be safe but bright enough to be seen against ambient light. What the user sees is a virtual 3.5-inch (8.9-centimeter) screen in the upper right corner of their field of view. Like a typical smartwatch screen, it can display up to 10 or so short lines of text and basic graphics, such as arrows when showing turn-by-turn navigation instructions, sufficient to provide an interface for an AI companion. In press materials, Halliday describes its glasses as “a hidden superpower to tackle life’s challenges.” And hidden it is. The display technology is much more discreet than that of other designs that use waveguides or prismatic lenses, which will often reveal a reflected image or a noticeable rainbow effect. Because it projects the image directly to the eye, the Halliday device doesn’t produce any such indications. The glasses can even be fitted with standard prescription lenses. ## Xreal’s One Pro: The Full Picture By contrast, the Xreal One Pro has two separate imagers—one for each eye—that show full-color, 1080p images that fill 57 degrees of the user’s field of view. This allows the One Pro to display the same content you’d see on a notebook or desktop screen. (A more typical field of view for AR glasses is 30 to 45 degrees. Halliday’s virtual screen occupies only a small portion of the user’s field of view.) Xreal’s One Pro smart glasses consist of many layers that work together to create a full-color, high-resolution display. Xreal In fact, the One Pro is intended to eliminate those notebook and desktop screens. “We’re now at the point where AR glasses’ spatial screens can truly replace physical monitors all day long,” Xreal CEO and cofounder Chi Xu said in a December 2024 press release. But it’s not a solution for use when you’re out and about; the glasses remain tethered to your computer or mobile device by a cable. The glasses use microLED imagers that deliver good color and contrast performance, along with lower power consumption than an OLED. 
They also use a “flat prism” lens that is 11 millimeters thick—less than half the thickness of the prisms in some other AR smart glasses, but three to four times as thick as typical prescription lenses. The flat-prism technology is similar to the “bird bath” prisms in Xreal’s previous glasses, which used a curved surface to reflect the display image to the wearer’s eye, but the flat prism’s thinner and lighter design offers a larger field of view. It also has advantages over the refraction-based waveguides used by other glasses, which can introduce visible artifacts such as colored halos. In order to improve the visibility of the projected image, the glasses block much of the ambient light from the surroundings. Karl Guttag, a display-industry expert and author of the KGOnTech blog, says that the Xreal One Pro blocks about 78 percent of real-world light and is “like dark sunglasses.” The One Pro also has a built-in spatial computing coprocessor, which enables the glasses to position an image relative to a direction in your view. For example, if you have an application that shows one set of information to your left, another in the middle, and a third to the right, you would simply turn your head to look at a different set. Or you could position an image in a fixed location—as with the Halliday glasses—so that it remains in front of you when you turn your head. Having separate imagers for each eye makes it possible to create stereoscopic 3D effects. That means you could view a 3D object in a fixed location in your room, making for a more immersive experience. ## The Cost of More-Immersive AR All these features come at a cost. Xreal’s glasses draw too much power to be run by battery, and they need a high-speed data connection to access the display data on a laptop or desktop computer. This connection provides power and enables high-resolution video streaming to the glasses, but it keeps the user tethered to the host device. The Halliday glasses, by contrast, run off a battery that the company states can last up to 12 hours between charges. Another key difference is weight. Early AR glasses were so heavy that they were uncomfortable to wear for long periods of time. The One Pro is relatively light at 87 grams, or a little less than the weight of a small smartphone. But Halliday’s simpler design and direct projector yield a device that’s less than half that, at 35 grams—a weight similar to that of many regular prescription glasses. Customers try out Halliday’s smart glasses at an expo in China in July 2025. Ying Tang/NurPhoto/Getty Images In both cases, this new generation of consumer-oriented smart glasses costs much less than enterprise AR systems, which cost several thousand dollars. The One Pro lists for $649, while the Halliday lists for $499. Currently, neither Halliday nor Xreal has a camera built into its glasses, which instead communicate through voice control and audio feedback. This eliminates extra weight and power consumption, helps keep costs down, and sidesteps the privacy concerns that proved to be one of the main sticking points for Google Glass. There are certainly applications where a camera can be helpful, however, such as for image recognition or when users with impaired vision want to hear the text of signs read aloud. Xreal does offer an optional high-resolution camera module that mounts at the bridge of the nose. Whether to include a built-in camera in future models is yet another trade-off these companies will need to consider. 
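For a rough sense of scale behind those field-of-view figures, the sketch below converts an angular field of view into the width of an equivalent physical screen. The 50-centimeter viewing distance is an assumption chosen for illustration, and the quoted 57-degree figure is treated as a horizontal field of view for simplicity.

```python
# Geometry sketch: apparent screen width = 2 * d * tan(FOV / 2).
# The 50 cm "arm's length" viewing distance is an assumed, illustrative value,
# and the quoted FOV is treated as horizontal rather than diagonal.
import math

def apparent_width_cm(fov_degrees: float, distance_cm: float = 50.0) -> float:
    return 2 * distance_cm * math.tan(math.radians(fov_degrees) / 2)

for fov in (30, 45, 57):  # the typical AR range, plus Xreal's quoted figure
    print(f"{fov} degrees at 50 cm: ~{apparent_width_cm(fov):.0f} cm wide")

# Under these assumptions, 57 degrees works out to roughly a 54-cm-wide image,
# comparable to the width of a 24-inch desktop monitor, which is why Xreal
# positions the One Pro as a monitor replacement; a 30-degree display is
# closer in apparent size to a large tablet held at the same distance.
```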
## What Do Consumers Really Want in Smart Glasses? Clearly, these two models of smart glasses represent very different design strategies and applications. The Halliday glasses exist largely as a mobile platform for an AI companion that you can use discreetly throughout the day, the way you would use a smartwatch. The One Pro, on the other hand, can act as a replacement for your computer’s monitor—or several monitors, thanks to the spatial computing feature. The high resolution and full color deliver the same information that you’re used to getting from the larger displays, with the trade-off that you’re physically tethered to your computer. Is either of these scenarios the killer app for smart glasses that we’ve been waiting for? With the rise of generative AI agents, people are growing increasingly comfortable with easy access to all sorts of information all the time. Smart speakers such as Amazon Echo have trained us to get answers to just about anything simply by asking. Wearing a device on your face that can discreetly present information on demand, like Halliday’s glasses, will certainly appeal to some consumers, especially when it’s priced affordably. Chris Chinnock, founder of Insight Media, thinks this is the path for the future. “I am not convinced that a display is needed for a lot of applications, or if you have a display, a small [field of view] version is sufficient. I think audio glasses coupled with AI capabilities could be very interesting in the near term, as the optics [or the] display for more full-featured AR glasses are developed.” On the other hand, many people may be seeking a compact and convenient alternative to the large, bulky external monitors that come with today’s laptops and desktops. On an airplane, for instance, it’s difficult to comfortably open your laptop screen enough to see it, and there’s little expectation of privacy on a crowded flight. But with smart glasses that project multiple virtual screens, you may actually be able to do some useful work on a long flight. For now, companies like Halliday and Xreal are hoping that there’s room for both strategies in the consumer market. And with multiple choices now available at consumer-friendly prices, we will soon start to see how much interest there is. Will consumers choose the smart AI companion, or a compact and private replacement for computer screens? In either case, your glasses are likely to become a lot smarter.
Model| Display technology| Number of displays| Resolution| Price (US $)
---|---|---|---|---
Halliday| Monochrome microLED| Right eye only| Low| 499
Xreal One Pro| Full-color microLED| Both eyes| High| 649
Meta Ray-Ban Display| Full-color liquid crystal on silicon (LCoS)| Right eye only| High| 799
TCL RayNeo X3 Pro| Full-color microLED| Both eyes| High| 1,250
Even Realities Even G1| Monochrome microLED| Both eyes| Low| 599
Rokid Max 2| Full-color micro-OLED| Both eyes| High| 529
spectrum.ieee.org
November 13, 2025 at 9:14 PM
China’s Tech Giants Race to Replace Nvidia’s AI Chips
_This post originally appeared on Recode China AI._ For more than a decade, Nvidia’s chips have been the beating heart of China’s AI ecosystem. Its GPUs powered search engines, video apps, smartphones, electric vehicles, and the current wave of generative AI models. Even as Washington tightened export rules for advanced AI chips, Chinese companies kept settling for and buying “China-only” Nvidia chips stripped of their most advanced features—H800, A800, and H20. But by 2025, patience in Beijing had seemingly snapped. State media began labeling Nvidia’s China-compliant H20 as unsafe and possibly compromised with hidden “backdoors.” Regulators summoned company executives for questioning, while _The Financial Times_ reported that tech companies like Alibaba and ByteDance were quietly told to cancel new Nvidia GPU orders. The Chinese AI startup DeepSeek also signaled in August that its next model will be designed to run on China’s “next-generation” domestic AI chips. The message was clear: China could no longer bet its AI future on a U.S. supplier. If Nvidia wouldn’t—or couldn’t—sell its best hardware in China, domestic alternatives would have to fill the void by designing specialized chips for both AI training (building models) and AI inference (running them). That’s difficult—in fact, some say it’s impossible. Nvidia’s chips set the global benchmark for AI computing power. Matching them requires not just raw silicon performance but memory, interconnection bandwidth, software ecosystems, and above all, production capacity at scale. Still, a few contenders have emerged as China’s best hope: Huawei, Alibaba, Baidu, and Cambricon. Each tells a different story about China’s bid to reinvent its AI hardware stack. ## Huawei’s AI Chips Are in the Lead Huawei is betting on rack-scale supercomputing clusters that pool thousands of chips together for massive gains in computing power. VCG/Getty Images If Nvidia is out, Huawei, one of China’s largest tech companies, looks like the natural replacement. Its Ascend line of AI chips has matured under U.S. sanctions, and in September 2025 the company laid out a multiyear public roadmap: * Ascend 950, expected in 2026 with a performance target of 1 petaflop in the low-precision FP8 format that’s commonly used in AI chips. It will have 128 to 144 gigabytes of on-chip memory, and interconnect bandwidths (a measure of how fast it moves data between components) of up to 2 terabytes per second. * Ascend 960, expected in 2027, is projected to double the 950’s capabilities. * Ascend 970 is further down the line, and promises significant leaps in both compute power and memory bandwidth. The current offering is the Ascend 910B, introduced after U.S. sanctions cut Huawei off from global suppliers. Roughly comparable to the A100, Nvidia’s top chip in 2020, it became the de facto option for companies that couldn’t get Nvidia’s GPUs. One Huawei official even claimed the 910B outperformed the A100 by around 20 percent in some training tasks in 2024. But the chip still relies on an older type of high-speed memory (HBM2E), and can’t match Nvidia’s H20: It holds about a third less data in memory and transfers data between chips about 40 percent more slowly. The company’s latest answer is the 910C, a dual-chiplet design that fuses two 910Bs. 
In theory, it can approach the performance of Nvidia’s H100 chip (Nvidia’s flagship chip until 2024); Huawei showcased a 384-chip Atlas 900 A3 SuperPoD cluster that reached roughly 300 Pflops of compute, implying that each 910C can deliver just under 800 teraflops when performing calculations in the FP16 format. That’s still shy of the H100’s roughly 2,000 Tflops, but it’s enough to train large-scale models if deployed at scale. In fact, Huawei has detailed how it used Ascend AI chips to train DeepSeek-like models. To address the performance gap at the single-chip level, Huawei is betting on rack-scale supercomputing clusters that pool thousands of chips together for massive gains in computing power. Building on its Atlas 900 A3 SuperPoD, the company plans to launch the Atlas 950 SuperPoD in 2026, linking 8,192 Ascend chips to deliver 8 exaflops of FP8 performance, backed by 1,152 TB of memory and 16.3 petabytes per second of interconnect bandwidth. The cluster will span a footprint larger than two full basketball courts. Looking further ahead, Huawei’s Atlas 960 SuperPoD is set to scale up to 15,488 Ascend chips. Hardware isn’t Huawei’s only play. Its MindSpore deep learning framework and lower-level CANN software are designed to lock customers into its ecosystem, offering domestic alternatives to PyTorch (a popular framework from Meta) and CUDA (Nvidia’s platform for programming GPUs), respectively. State-backed firms and U.S.-sanctioned companies like iFlytek, 360, and SenseTime have already signed on as Huawei clients. The Chinese tech giants ByteDance and Baidu have also ordered small batches of chips for trials. Yet Huawei isn’t an automatic winner. Chinese telecom operators such as China Mobile and China Unicom, which are also responsible for building China’s data centers, remain wary of Huawei’s influence. They often prefer to mix GPUs and AI chips from different suppliers rather than fully commit to Huawei. Big internet platforms, meanwhile, worry that partnering too closely could hand Huawei leverage over their own intellectual property. Even so, Huawei is better positioned than ever to take on Nvidia. ## Alibaba Pushes AI Chips to Protect Its Cloud Business Alibaba Cloud’s business depends on reliable access to training-grade AI chips. So it’s making its own. Sun Pengxiong/VCG/Getty Images Alibaba’s chip unit, T-Head, was founded in 2018 with modest ambitions around open-source RISC-V processors and data center servers. Today, it’s emerging as one of China’s most aggressive bids to compete with Nvidia. T-Head’s first AI chip, the Hanguang 800, announced in 2019, is an efficient design for AI inference; it can process 78,000 images per second and optimize recommendation algorithms and large language models (LLMs). Built on a 12-nanometer process with around 17 billion transistors, the chip can perform up to 820 trillion operations per second (TOPS) and access its memory at speeds of around 512 GB per second. But its latest design—the PPU chip—is something else entirely. Built with 96 GB of high-bandwidth memory and support for high-speed PCIe 5.0 connections, the PPU is pitched as a direct rival to Nvidia’s H20—a claim repeated during a state-backed television program featuring a China Unicom data center. Reports suggest this data center runs over 16,000 PPUs out of 22,000 chips in total. _The Information_ also reported that Alibaba has been using its AI chips to train LLMs. 
Besides chips, Alibaba Cloud also recently upgraded its supernode server, named Panjiu, which now features 128 AI chips per rack, a modular design for easy upgrades, and full liquid cooling. For Alibaba, the motivation is as much about cloud dominance as national policy. Its Alibaba Cloud business depends on reliable access to training-grade chips. By making its own silicon competitive with Nvidia’s, Alibaba keeps its infrastructure roadmap under its own control. ## Baidu’s Big Chip Reveal in 2025 At a recent developer conference, Baidu unveiled a 30,000-chip cluster powered by its third-generation P800 processors. Qilai Shen/Bloomberg/Getty Images Baidu’s chip story began long before today’s AI frenzy. As early as 2011, the search giant was experimenting with field-programmable gate arrays (FPGAs) to accelerate its deep learning workloads for search and advertising. That internal project later grew into Kunlun. The first generation arrived in 2018. Kunlun 1 was built on Samsung’s 14-nm process and delivered around 260 TOPS with a peak memory bandwidth of 512 GB per second. Three years later came Kunlun 2, a modest upgrade. Fabricated on a 7-nm node, it pushed performance to 256 TOPS for low-precision INT8 calculations and 128 Tflops for FP16, all while reducing power to about 120 watts. Baidu aimed this second generation less at training and more at inference-heavy tasks such as its Apollo autonomous cars and Baidu AI Cloud services. Also in 2021, Baidu spun off Kunlun into an independent company called Kunlunxin, which was then valued at US $2 billion. For years, little surfaced about Kunlun’s progress. But that changed dramatically in 2025. At its developer conference, Baidu unveiled a 30,000-chip cluster powered by its third-generation P800 processors. Each P800 chip, according to research by Guosen Securities, reaches roughly 345 Tflops at FP16, putting it on the same level as Huawei’s 910B and Nvidia’s A100. Its interconnect bandwidth is reportedly close to Nvidia’s H20. Baidu pitched the system as capable of training “DeepSeek-like” models with hundreds of billions of parameters. Baidu’s latest multimodal models, the Qianfan-VL family of models with 3 billion, 8 billion, and 70 billion parameters, were all trained on its Kunlun P800 chips. Kunlun’s ambitions extend beyond Baidu’s internal demands. This year, Kunlun chips secured orders worth over 1 billion yuan (about $139 million) for China Mobile’s AI projects. That news helped restore investor confidence: Baidu’s stock is up 64 percent this year, with the Kunlun reveal playing a central role in that rise. Just today, Baidu announced its roadmap for its AI chips, promising to roll out a new product every year for the next five years. In 2026, the company will launch the M100, optimized for large-scale inference, and in 2027 the M300 will arrive, optimized for training and inference of massive multimodal models. Baidu hasn’t yet released details about the chips’ parameters. Still, challenges loom. Samsung has been Baidu’s foundry partner from day one, producing Kunlun chips on advanced process nodes. Yet reports from Seoul suggest Samsung has paused production of Baidu’s 4-nm designs. ## Cambricon’s Chip Moves Make Waves in the Stock Market Cambricon struggled in the early 2020s, with chips like the MLU 290 that couldn’t compete with Nvidia chips. CFOTO/Future Publishing/Getty Images The chip company Cambricon is probably the best-performing publicly traded company on China’s domestic stock market. 
Over the past 12 months, Cambricon’s share price has jumped nearly 500 percent. The company was officially spun out of the Chinese Academy of Sciences in 2016, but its roots stretch back to a 2008 research program focused on brain-inspired processors for deep learning. By the mid-2010s, the founders believed AI-specific chips were the future. In its early years, Cambricon focused on accelerators called neural processing units (NPUs) for both mobile devices and servers. Huawei was a crucial first customer, licensing Cambricon’s designs for its Kirin mobile processors. But as Huawei pivoted to develop its own chips, Cambricon lost a flagship partner, forcing it to expand quickly into edge and cloud accelerators. Backing from Alibaba, Lenovo, iFlytek, and major state-linked funds helped push Cambricon’s valuation to $2.5 billion by 2018 and eventually landed it on Shanghai’s Nasdaq-like STAR Market in 2020. The next few years were rough. Revenues fell, investors pulled back, and the company bled cash while struggling to keep up with Nvidia’s breakneck pace. For a while, Cambricon looked like another cautionary tale of Chinese semiconductor ambition. But by late 2024, fortunes began to change. The company returned to profitability, thanks in large part to its newest MLU series of chips. That product line has steadily matured. The MLU 290, built on a 7-nm process with 46 billion transistors, was designed for hybrid training and inference tasks, with interconnect technology that could scale to clusters of more than 1,000 chips. The follow-up MLU 370, the last version before Cambricon was sanctioned by the United States government in 2022, can reach 96 Tflops at FP16. Cambricon’s real breakthrough came with the MLU 590 in 2023. The 590 was built on a 7-nm process and delivered peak performance of 345 Tflops at FP16, with some reports suggesting it could even surpass Nvidia’s H20 in certain scenarios. Importantly, it introduced support for less-precise data formats like FP8, which eased memory bandwidth pressure and boosted efficiency. This chip didn’t just mark a leap—it turned Cambricon’s finances around, restoring confidence that the company could deliver commercially viable products. Now all eyes are on the MLU 690, currently in development. Industry chatter suggests it could approach, or even rival, Nvidia’s H100 in some metrics. Expected upgrades include denser compute cores, stronger memory bandwidth, and further refinements in FP8 support. If successful, it would catapult Cambricon from “domestic alternative” status to being a genuine competitor at the global frontier. Cambricon still faces hurdles: its chips aren’t yet produced at the same scale as Huawei’s or Alibaba’s, and past instability makes buyers cautious. But symbolically, its comeback matters. Once dismissed as a struggling startup, Cambricon is now seen as proof that China’s domestic chip path can yield profitable, high-performance products. ## A Geopolitical Tug-of-War At its core, the battle over Nvidia’s place in China isn’t really about teraflops or bandwidth. It’s about control. Washington sees chip restrictions as a way to protect national security and slow Beijing’s advance in AI. Beijing sees rejecting Nvidia as a way to reduce strategic vulnerability, even if it means temporarily living with less powerful hardware. China’s big four contenders, Huawei, Alibaba, Baidu, and Cambricon, along with other smaller players such as Biren, Muxi, and Suiyuan, don’t yet offer real substitutes. 
Most of their offerings are barely comparable with the A100, Nvidia’s best chip from five years ago, and they are working to catch up with the H100, which arrived three years ago. Each player is also bundling its chips with proprietary software stacks. This approach could force Chinese developers accustomed to Nvidia’s CUDA to spend more time adapting their AI models, which, in turn, could affect both training and inference. DeepSeek’s development of its next AI model, for example, has reportedly been delayed. The primary reason appears to be the company’s effort to run more of its AI training or inference on Huawei’s chips. The question is not whether Chinese companies can build chips—they clearly can. The question is whether and when they can match Nvidia’s combination of performance, software support, and trust from end users. On that front, the jury’s still out. But one thing is certain: China no longer wants to play second fiddle in the world’s most important technology race.
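As a quick sanity check on the cluster figures quoted earlier, you can back the per-chip numbers out yourself. The short Python sketch below simply redoes the article’s arithmetic (roughly 300 petaflops of FP16 across the 384 chips of the Atlas 900 A3, and 8 exaflops of FP8 across the 8,192 chips of the planned Atlas 950) and compares the results with the H100’s roughly 2,000 teraflops and the Ascend 950’s 1-petaflop FP8 target; it is only a consistency check of figures already cited, not new benchmark data.

```python
# Back-of-the-envelope check of the cluster figures cited in the article.

TFLOPS_PER_PFLOPS = 1_000
PFLOPS_PER_EFLOPS = 1_000

# Atlas 900 A3 SuperPoD: ~300 Pflops FP16 from 384 Ascend 910C chips.
per_910c_tflops = 300 * TFLOPS_PER_PFLOPS / 384
print(f"910C, FP16: ~{per_910c_tflops:.0f} Tflops per chip")        # ~781

# The H100 is roughly 2,000 Tflops FP16, so one 910C lands near 40 percent of it.
print(f"vs. H100: {per_910c_tflops / 2_000:.0%} of H100 FP16")

# Planned Atlas 950 SuperPoD: 8 exaflops FP8 from 8,192 Ascend chips.
per_950_pflops = 8 * PFLOPS_PER_EFLOPS / 8_192
print(f"Ascend 950-class, FP8: ~{per_950_pflops:.2f} Pflops per chip")  # ~0.98
```

The first result matches the article’s “just under 800 teraflops” per 910C, and the second lines up with the Ascend 950’s stated 1-petaflop FP8 performance target.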
spectrum.ieee.org
November 13, 2025 at 9:14 PM
How Do You Know Whether You Perceive Pain the Same as Others?
How much pain are you in on a scale from 1 to 10? This simple method is still the way pain is measured in doctors’ offices, clinics, and hospitals—but how do I know if my five out of 10 is the same as yours? A new, early-stage platform aims to more objectively measure and share our individual perception of pain. It measures brain activity in two people to understand how their experiences compare and to re-create one person’s pain for the other. The platform was developed as a partnership between the large Tokyo-based telecommunications company NTT Docomo and startup PaMeLa, short for Pain Measurement Laboratory, in Osaka, Japan. It’s part of a project from Docomo called Feel Tech. “We are developing a human-augmentation platform designed to deepen mutual understanding between people,” a Docomo representative told __IEEE Spectrum__ by email. (Answers were originally provided in Japanese and translated by Docomo’s public relations.) “Previously, we focused on sharing movement, touch, and taste—senses that are inherently difficult to express and communicate. This time, our focus is on pain, another sense that is challenging to articulate.” Docomo demonstrated the platform last month at the Combined Exhibition of Advanced Technologies (CEATEC), Japan’s largest electronics trade show. ## How Shared Pain Perception Tech Works The system consists of three components: a pain-sensing device, a platform for estimating the difference in sensitivity, and a heat-based actuation device. First, the system uses electroencephalography (EEG) to measure brain waves and uses an AI model to “visualize” pain as a score between 0 and 100, for both the sender and receiver. The actuation device is then calibrated based on each person’s sensitivity, so a sensation transmitted to both people will feel the same. In this initial version, the platform works with thermally induced pain stimuli. “This method allows for precise adjustment and ensures safety during research and development,” Docomo says. PaMeLa also used thermal stimulation in its research on determining the intensity level of pain, which graded the pain stimulation data of 461 subjects with machine learning algorithms. However, the company says, pain from other sources can also be shared. Eventually, Docomo hopes to convey many types of physical and even psychological pain, a goal for future research. “We believe there are various possibilities for how pain can be captured and shared,” Docomo says. ## Finding a Use Case for Shared Pain Perception The technology is still at a very early stage, says Carl Saab, the founder and director of the Cleveland Clinic Consortium for Pain. Saab, who is also an adjunct professor at Brown University, researches pain biomarkers, including through EEG measurements and AI. For one thing, Saab says he’s not clear on what the use case is for the platform. In terms of the science, he also notes that pain differs in healthy people and those experiencing ongoing pain, such as chronic pain or migraine. “If you induce pain in a healthy volunteer versus somebody who’s a pain patient, the nature of the representation of pain in the brain is different,” Saab says. Healthy volunteers know that the pain will be temporary, he explains. But in real patients, chronic pain often comes with anxiety, depression, and sometimes side effects from medication. In a study Saab conducted several years ago, for example, he induced pain by submerging volunteers’ arms in ice for an extended period. 
When he did the same with pain patients, the resulting brain activity was much more complex, and the signals weren’t so clear. Docomo says it plans to collaborate with hospitals in the future to verify the technology in medical settings. And in March, PaMeLa announced it had completed a clinical trial that analyzed changes in EEG signals before and after administration of painkillers in patients undergoing surgery under general anesthesia. The startup is also investigating pain in other contexts, such as exercise, acute pain from injections, and chronic pain. “Pain is a multidimensional experience,” Saab says. “When you say you’re measuring someone’s pain, you always have to be careful about what kind of dimension you are measuring.”
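Docomo hasn’t published how its sensitivity-matching step works, but the general idea it describes (learn each person’s stimulus-to-score relationship, then invert the receiver’s curve to pick an equivalent stimulus) can be sketched in a few lines. The Python below is purely illustrative: the calibration points, sensitivity curves, and temperatures are invented, and this is not PaMeLa’s or Docomo’s actual model.

```python
import numpy as np

# Hypothetical per-person calibration data: thermal stimulus (deg C) versus
# the 0-100 pain score an EEG-based model might report. Invented numbers.
stim_c = np.array([40.0, 43.0, 46.0, 49.0])
sender_score = np.array([5.0, 25.0, 55.0, 90.0])      # sender is less sensitive
receiver_score = np.array([10.0, 40.0, 75.0, 100.0])  # receiver is more sensitive

def match_stimulus(sender_stim_c: float) -> float:
    """Pick the receiver stimulus that should feel like the sender's pain.

    1. Look up the sender's pain score for the given stimulus.
    2. Invert the receiver's curve to find the stimulus that produces the
       same score for the receiver.
    Both steps use linear interpolation over the calibration points.
    """
    score = np.interp(sender_stim_c, stim_c, sender_score)
    return float(np.interp(score, receiver_score, stim_c))

# The sender feels a 46 deg C stimulus (score ~55); the more sensitive
# receiver should get a slightly cooler stimulus to reach the same score.
print(f"{match_stimulus(46.0):.1f} deg C")   # ~44.3
```

The point of the sketch is only the direction of the adjustment: a more sensitive receiver gets a weaker stimulus, so both people end up at the same reported pain score.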
spectrum.ieee.org
November 13, 2025 at 4:59 AM
The Complicated Reality of 3D Printed Prosthetics
Around ten years ago, fantastical media coverage of 3D printing dramatically increased expectations for the technology. A particular darling of that coverage was the use of 3D printing for prosthetic limbs: For example, in 2015, _The New York Times_ celebrated the US $15 to $20 3D-printed prosthetic hands facilitated by the nonprofit E-nable, which paired hobbyist 3D printer owners with children with limb differences. The magic felt undeniable: disabled children could get cheap, freely accessible mechanical hands made by a neighbor with an unusual hobby. Similar stories about prosthetics abounded, painting a picture of an emerging high-tech utopia enabled by a technology straight out of _Star Trek_. But as so often happens, the Gartner Hype Cycle was in full force. By the mid-2010s, 3D printing was in the “Peak of Inflated Expectations” phase, and prosthetics was no exception. Those LEGO-style hands getting media attention didn’t have the strength needed for a wearable device, the prints themselves had too many inaccuracies, and the designs were—as you may imagine an entirely plastic object to be—deeply uncomfortable. Quorum’s 3D-printed prosthesis socket. Quorum The so-called “Trough of Disillusionment” followed. Joe Johnson, CEO of Quorum Prosthetics in Windsor, Colorado, saw prosthetists shy away from 3D printing technologies for years. Johnson stuck it out, though, waiting for technology and bureaucracy to catch up to his ambition. A milestone came last year, when U.S. medical insurers released an “L-code” specifically for adjustable sockets for prosthetic limbs. An L-code allows durable medical equipment—such as prosthetics—to be handled for billing within the U.S. insurance system. Quorum’s engineers responded with a sophisticated, adjustable socket utilizing 3D printing. Quorum’s design can adjust both volume and compression on residual limbs, making for a better fit, like tightening your shoelaces. Despite the socket’s sleek, high-tech appearance, Johnson says it _could_ be made using traditional methods. But 3D printing makes it a “bit better and easier.” “When you look at overall cost of labor,” says Johnson, “it just keeps going up. To manufacture one of our sockets would take a technician 12 or 16 hours to make [using traditional methods].” Using 3D printing, he says, “we can make five overnight.” As a result, Quorum spends less on technician labor. However, there are new costs. Quorum needs to pay for software subscriptions and licenses on top of the overhead required to operate a nearly $1 million Hewlett-Packard 3D printer. “We have to spend $50,000 on the A/C unit just to control the humidity,” says Johnson. At the end of the day, it costs over $1,000 to print each socket, even when multiple sockets are printed together. The costs are actually now higher than if Quorum didn’t use 3D printing to manufacture prostheses, but Johnson believes the quality is superior. “You can see more patients. [3D printing] is so precise and less adjustments need to be made.” This has meant fewer follow-up visits for patients and, for many, better fits. Operation Namaste is using 3D printing to standardize the liners for prosthetic limb sockets. Operation Namaste ## Why hasn’t 3D printing lowered costs? 
When I asked Jeff Erenstone, a prosthetist for over two decades and founder of the prosthetic limb nonprofit Operation Namaste, why 3D-printed designs hadn’t lowered costs, he said Quorum is “able to make a socket that was not possible before 3D printing—very next level socket and sophistication. What they are making isn’t lowering costs any more than Ferrari is lowering costs. They are making the Ferrari of sockets.” But Erenstone says the technology is finally getting closer to achieving some of the things everyone imagined was possible ten years ago. Namely, the ability to share designs around the world and increase communication between practitioners has been life-changing. Erenstone set his sights on cracking the code around prosthetic liners—the flexible silicone socks that prosthesis users roll up onto their residual limb before inserting it into the prosthesis socket. Liners from one of the most common brands, Ossur, are sold for many hundreds of dollars each, but are vital for a prosthetic to be comfortable enough to wear all day. To bring high-quality liners to prosthesis users in low-resourced countries, Operation Namaste is standardizing the molds used to make silicone liners. Clinicians anywhere in the world can print the mold on an inexpensive 3D printer and then produce a high-quality silicone liner with about $22 in materials and local labor. “3D printing has value in low income countries because accessibility is so much harder,” explains Erenstone. “I have not seen it [have as much value] in the urban areas where there is adequate prosthetic care.” 3D printing has been especially helpful in war zones such as Ukraine and Sudan, where it may be unsafe for prosthetists to visit from abroad and there are very few resources. Canada-based Victoria Hand Project identifies prosthetics and orthotics clinics around the world, sets them up with a 3D print lab, and trains the clinicians in 3D printing software. Where 3D printing has made a difference is in increasing knowledge sharing among practitioners and the availability of low-cost designs. It is unclear, however, whether prosthetics printed with cheaper 3D printers hold up compared to conventional time-tested, body-powered, low-cost designs. Quorum Prosthetics operates a nonprofit called One Leg at a Time in Tanzania, where it trains local people in 3D scanning and measuring of residual limbs, but these scans are sent back to Colorado, where an industrial multi-jet fusion printer actually prints the hands. Local Tanzanians may be trained to use the new technology, but the best equipment to finish the task is still out of their reach. Unlimited Tomorrow’s TrueLimb. Unlimited Tomorrow ## Can 3D-printed prosthetics be cheaper? The goal of using 3D printing to make prostheses less expensive is still being pursued, but non-technical issues pose significant obstacles. Easton LaChapelle, founder of Unlimited Tomorrow, sought to leverage 3D printing—a technology he fell in love with as a teenager—to create a high-functioning, low-cost hand to rival the clunky multi-articulating prosthetic hands on the market. The result was the TrueLimb, a $7,000 prosthetic hand so intricate in its appearance it looks as if it were carved from wood. The TrueLimb was sold directly to consumers in an effort to bypass the headaches of medical insurance, but even at $7,000—about one-tenth the cost of other multi-articulating myoelectric hands—the hand proved too expensive for many. Customers approached LaChapelle and asked the company to take insurance. 
Unlimited Tomorrow then started working with prosthetists, who had to decide between billing insurance companies for, say, a German-made prosthetic hand costing tens of thousands of dollars and billing them for the TrueLimb. “Prosthetists were hesitant to work with us because our price point was so low, they couldn’t mark it up to what they are used to,” explains LaChapelle. “It doesn’t matter what the technology is in these circumstances. Unlimited Tomorrow could have produced the best device, but clinicians are like ‘why would I bill for a TrueLimb when I could bill a Bebionic?’” As a result, TrueLimb’s cost shot up. Soon enough, says LaChapelle, “We became exactly the problem we tried to solve. We were just another fancy arm that cost a bunch of money and for the consumer there was still an out of pocket expense.” LaChapelle decided it was unethical to continue this way and has put Unlimited Tomorrow “on pause.” In the meantime, he’s working on commercializing some of the innovations he and his team of engineers stumbled upon along the way, such as their haptic glove system, which they hope will take hold in virtual reality applications. “The US [prosthetics] market is not gonna change,” he says with dismay. With the profits from the glove, he hopes to focus on developing a “badass body-powered [prosthetic] device” to distribute through a nonprofit. The insurance companies are innovating, too, and not in a helpful way. While 3D-printed devices now have official, codified L-codes that prosthetists across the US can bill, Joe Johnson says insurance companies don’t care about the benefits of 3D-printed devices. “The lawyers have reached a level of sophistication of writing policy that they can write around mandates [that should guarantee coverage],” Johnson explains. “We have certain prosthetic mandates for coverage but the insurance companies have become very sophisticated. They’re betting on you giving up.” Insurance companies still refuse to cover even microprocessor-enabled knees, a technology going on 25 years old, says Johnson. He and his team entertained the possibility of trying to recycle microprocessor knees to make them affordable for more patients. In a not-too-distant future, they imagined, insurance companies would find new ways to thwart their efforts. Says Johnson: “They’d totally brick those knees.” _This article was supported by the IEEE Foundation and a John C. Taenzer fellowship grant._
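Circling back to the cost question Johnson raised earlier in the story, a rough back-of-the-envelope comparison helps show why a printed socket can cost more per unit even as it saves labor. In the Python sketch below, the per-socket figures (12 to 16 technician hours, over $1,000 per print, five sockets per overnight run) come from the article, while the technician’s hourly rate is an assumed placeholder, not a Quorum number.

```python
# Rough comparison of traditional vs. 3D-printed socket costs.
# Figures marked "article" come from the story; the hourly rate is an assumption.

TECH_HOURS_TRADITIONAL = 14        # article: 12 to 16 hours per socket
TECH_HOURLY_RATE = 40.0            # assumption: placeholder fully loaded $/hour
PRINT_COST_PER_SOCKET = 1_000.0    # article: "over $1,000" per printed socket
SOCKETS_PER_OVERNIGHT_RUN = 5      # article: "we can make five overnight"

traditional_labor = TECH_HOURS_TRADITIONAL * TECH_HOURLY_RATE
print(f"Traditional socket, labor only: ${traditional_labor:,.0f}")
print(f"Printed socket, print cost only: ${PRINT_COST_PER_SOCKET:,.0f}")
print(f"Sockets finished per overnight print run: {SOCKETS_PER_OVERNIGHT_RUN}")

# At these placeholder rates the printed socket is the pricier one per unit,
# which matches Johnson's point: the payoff is throughput and fit quality,
# not a lower per-socket price.
```

The sketch leaves out software licenses, the roughly million-dollar printer, and climate control, all of which push the printed-socket figure higher still, as the article notes.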
spectrum.ieee.org
November 13, 2025 at 4:59 AM
Be a Force for Good On Giving Tuesday
Giving Tuesday, held on 2 December this year, is a global day dedicated to generosity, empowering individuals and organizations to transform people’s lives and communities. For this year’s event, IEEE and the IEEE Foundation invite members to invest in the organization’s charitable programs. The programs aim to inspire the next generation of engineers, provide sustainable energy to those in need, assist in emergency response efforts, and more. This Giving Tuesday, members have the opportunity to help amplify the technological breakthroughs and innovative programs that change lives globally. ## Double your impact The initial US $85,000 donated to the Giving Tuesday campaign will be matched by the IEEE Foundation, dollar for dollar, bringing the total to as much as $170,000. Donors can direct their gift to the IEEE program they feel most connected to, or they can choose to direct their donation to the IEEE Foundation for efforts that: * Illuminate the possibilities of technology to address global challenges. * Educate the next generation of innovators and engineers. * Engage a wider audience to appreciate the impact of engineering. * Energize innovation by celebrating excellence. * Shape the destiny of the next generation. ## Help shine a light on your favorite program Donating money is not the only way to make an impact on IEEE’s Giving Tuesday. Here are some other opportunities. * Become a community fundraiser and help promote your favorite IEEE philanthropic program or the IEEE Foundation to your network by creating a personalized page on the IEEE Foundation website. Once your page is set up, you can share it on your social media profiles and email it to your friends, family, and professional contacts. * Share, like, and comment on Giving Tuesday posts on Facebook and LinkedIn leading up to and on the day. * Post an #Unselfie photo—a picture of yourself accompanied by a note on why you support IEEE’s philanthropic programs—on social media using the hashtags #IEEEFoundation and #IEEEGivingTuesday. The Foundation provides a tool kit with social media templates and fundraising resources on its website. For updates, check the IEEE Foundation Giving Tuesday web page and follow the Foundation on Facebook and LinkedIn.
spectrum.ieee.org
November 11, 2025 at 7:53 PM
DARPA and Texas Bet $1.4 Billion on a Unique Foundry
A 1980s-era semiconductor fab in Austin, Texas, is getting a makeover. The Texas Institute for Electronics (TIE), as it’s called now, is tooling up to become the only advanced packaging plant in the world that is dedicated to 3D heterogeneous integration (3DHI)—the stacking of chips made of multiple materials, both silicon and non-silicon. The fab is the infrastructure behind DARPA’s Next-Generation Microelectronics Manufacturing (NGMM) program. “NGMM is focused on a revolution in microelectronics through 3D heterogeneous integration,” said Michael Holmes, managing director of the program. Stacking two or more silicon chips inside the same package makes them act as if they are all one integrated circuit. It already powers some of the most advanced processors in the world. But DARPA predicts silicon-on-silicon stacking will result in no more than a 30-fold boost in performance over what’s possible with 2D integration. By contrast, doing it with a mix of materials—gallium nitride, silicon carbide, and other semiconductors—could deliver a 100-fold boost, Holmes told engineers and other interested parties at the program’s unofficial coming-out party, the NGMM Summit, late last month. The new fab will make sure these unusual stacked chips are prototyped and manufactured in the United States. Startups, and there were many at the launch event, are looking for a place to prototype and begin manufacturing ideas that are too weird for anywhere else—and, they hope, to bypass the lab-to-fab valley of death that claims many hardware startups. The state of Texas is contributing $552 million to stand up the fab and its programs, with DARPA contributing the remaining $840 million. After NGMM’s five-year mission is complete, the fab is expected to be a self-sustaining business. “We are, frankly, a startup,” said TIE CEO Dwayne LaBrake. “We have more runway than a typical startup, but we have to stand on our own.” ## Starting up a 3DHI Fab Getting to that point will take a lot of work, but the TIE foundry is off to a quick start. On a tour of the facility, _IEEE Spectrum_ saw multiple chip manufacturing and testing tools in various states of installation and met several engineers and technicians who had started within the last three months. TIE expects all the fab’s tools to be in place in the first quarter of 2026. Just as important as the tools themselves is the ability of foundry customers to use them in a predictable manufacturing process. That’s something that is particularly difficult to develop, TIE officials explained. At the most basic level, non-silicon wafers are often not the same size as each other. And they have different mechanical properties, meaning they expand and contract with temperature at different rates. Yet much of the fab’s work will be linking these chips together with micrometer precision. The first phase of getting that done is the development of what are called a process design kit and an assembly design kit. The former provides the rules that constrain semiconductor design at the fab. The latter is the real heart of things, because it gives the rules for the 3D assembly and other advanced packaging. Next, TIE will refine those by way of three 3DHI projects, which NGMM is calling exemplars. These are a phased-array radar, an infrared imager called a focal plane array, and a compact power converter. Piloting those through production “gives us an initial roadmap… an on-ramp into tremendous innovation across a broader application space,” said Holmes. 
These three very different products are emblematic of how the fab will have to operate once it’s up and running. Executives described it as a “high-mix, low-volume” foundry, meaning it’s going to have to be good at doing many different things, but it’s not going to make a lot of any one thing. This is the opposite of most silicon foundries. A high-volume silicon foundry gets to run lots of similar test wafers through its process to work out the bugs. But TIE can’t do that, so instead it’s relying on AI—developed by Austin startup Sandbox Semiconductor—to help predict the outcome of tweaks to its processes. Along the way, NGMM will provide a number of research opportunities. “What we have with NGMM is a very rare opportunity,” said Ted Moise, a professor at UT Dallas and an IEEE Fellow. With NGMM, universities are planning to work on new thermal conductivity films, microfluidic cooling technology, understanding failure mechanisms in complex packages, and more. “NGMM is a weird program for DARPA,” admitted Whitney Mason, director of the agency’s Microsystems Technology Office. “It’s not our habit to stand up facilities that do manufacturing.” But “Keep Austin Weird” is the city’s unofficial motto, so maybe NGMM and TIE will prove a perfect fit.
spectrum.ieee.org
November 11, 2025 at 4:10 AM
Startup Using Nanotips and Naphthalene for New Satellite Thruster
It sounds like a NASA pipe dream: a new spacecraft thruster that’s up to 40 percent more power efficient than today’s. Even better, its fuel costs less than a thousandth as much, and the thruster itself weighs an eighth as much. A startup called Orbital Arc claims it can make such a thruster. With this design, “we can go from a thruster that’s about a few inches across and several kilograms to a thruster on a chip that’s about an inch across and has the same thrust output, but weighs about an eighth as much,” the company’s founder, Jonathan Huffman, says. According to Orbital Arc, the hardware would be small enough to fit on the smallest low Earth orbit satellites but generate enough power for an interplanetary mission. Such inexpensive thrust could bring meaningful savings for satellite operators hoping to dodge debris, or mission operators aiming to send probes to distant planets. The key to these innovations is a combination of cheap, readily available fuel, MEMS microfabrication, and a strong love of sci-fi. ## Designing a Better Thruster Thrusters generally work by creating and then expelling a plasma, pushing a spacecraft in the opposite direction. Inside ever-popular Hall thrusters, a magnetic field traps electrons in a tight, circular orbit. A noble gas—commonly xenon—drifts into a narrow channel, where it collides with the circulating charges, which knock off electrons and ionize the gas into a plasma. A high-voltage electric field then rockets the plasma out the exhaust. Orbital Arc’s technology looks a bit different and came about almost coincidentally. Huffman was a biotech consultant and self-described “sci-fi nerd” who, in his spare time, had been commissioned to design fictitious technology for a futuristic video game. He had to figure out how aircraft might maneuver 250 years from now to make the game controls realistic, and so he started researching state-of-the-art propulsion systems. He quickly came to understand a limitation of existing ion thrusters he thought could be improved upon within the coming centuries, and (spoiler alert) possibly sooner: if a mission requires more thrust, its thruster needs to be heavier. But crucially, “there’s a certain point at which adding more mass to the thruster negates all of the benefits you can get from extra thrust,” he says. So, to retain those benefits, thrusters need to be small but mighty. Huffman’s familiarity with biology labs gave him an unexpected edge when it came to propulsion design. Through his job, he learned about nanoscale tips—nozzles that emit ions—used to generate intense electromagnetic fields for biomedical research. They’re found in mass spectrometers, instruments that identify unknown chemicals by converting them into ions, accelerating them, and watching how they fly. He suspected that such a system could be miniaturized even more to handle the ionization process in a thruster. After a year and a half of developing the concept, Huffman was convinced that his idea for a small thruster had potential beyond a video game. And he was right. Each Orbital Arc thruster has a chip at its heart with millions of micrometer-scale, positively charged tips embedded in it and channels to direct gas flow—naphthalene flows in, and ions flow out. As naphthalene molecules pass the charged tips, the molecules become polarized—here, that means a molecule’s electrons bunch up on one of its sides. 
Because of the uneven field created by the charge, the molecules get dragged towards a tip and are then trapped there, unable to escape until they release electrons. Once they release electrons, “you have an ion that’s at the point of a really sharp positively charged object, and it itself is now positively charged. So it accelerates,” Huffman explains. The repelled ions fly away and spray out into space, propelling the spacecraft forward. An advantage of this design is the power savings that come from avoiding the internal plasma generation that other thrusters rely on, Huffman says. “Plasmas have losses because everything’s in a big soup mixed together,” Huffman explains. Free electrons in a plasma can recombine with ions to produce neutral atoms “and now I’ve lost the energy that I put in to make that charged particle. It’s a waste of power.” Recent calculations show the naphthalene nanotip thruster providing a 30 to 40 percent improvement in power efficiency, he claims. By avoiding plasmas altogether, the Orbital Arc design is able to capitalize on those power savings, as a recent demonstration showed: just six of Orbital Arc’s tips were able to generate about three times more ion current than an array of 320,000 tips from a group at MIT, Huffman says. Two and a half years after his “aha” moment (and after “building the whole darn thing in Excel”), Huffman is the CEO of Orbital Arc, a startup testing four working prototypes of its tiny tips-on-chips. The thruster is not only innovative for its size, but also for its fuel. Naphthalene—the main ingredient of mothballs—is a readily available byproduct of oil refineries. The compound may smell bad, but it’s safe to handle and extremely cheap, Huffman says, costing around US $1.50 per kilogram compared to some $3,000 per kilogram for xenon. Orbital Arc’s use of naphthalene helps shrink product costs, which the company claims are just one percent those of traditional Hall thrusters. “I think that’s believable,” says Jonathan MacArthur, a postdoctoral researcher at Princeton University’s Electric Propulsion and Plasma Dynamics Laboratory. “What remains to be seen is, okay, it’s cheap, but if I put diesel in my gas car because it’s on sale, that doesn’t necessarily bode well for the engine in my car.” He wishes the startup would release data to back up its cost claims—and, while it’s at it, its performance claims as well. ## From Prototype to Flight For now, in the prototype stage, each chip contains only six tips, fabricated using MEMS manufacturing processes in a cleanroom at Oak Ridge National Laboratory. But the next step is to manufacture a full-scale version of the chip in a university lab, Huffman says. Then, the company will need to build the thruster that goes around the chip. “That’s a relatively simple device. It’s a valve, it’s a few wires, it’s a few structural components. Very, very straightforward,” Huffman claims. He says he’ll need to integrate all of those parts before running through vibration testing, radiation testing, thermal cycling, and other steps on the way to achieving flight qualification. “Two years from now, I can have a product that is sellable, probably.” Huffman thinks Orbital Arc’s initial customers would be small teams, like startups or research groups. He’s confident that they’ll be willing to try the new thrusters, despite the risks inherent to new technologies, because of the expected performance at low cost. 
“So some folks just won’t have any choice but to buy it, even if it hasn’t flown before. If they want to do the mission, they’re going to take the risk,” he says. Princeton’s MacArthur is skeptical of that claim. “When you’re choosing a propulsion system, generally data and heritage is everything.” He’s not so sure that customers will be willing to take on the risk of a new thruster without a history of flight. Still, some CubeSat-scale missions may agree to use new thrusters at a discount, suggests Oliver Jia-Richards, who studies in-space propulsion at the University of Michigan. Customers may also be willing to take a chance on Orbital Arc because other startups, like Enpulsion, have recently been successful with their new electric propulsion technology, he says. But “with this kind of thing, there’s always risks.” After targeting small missions, Huffman wants to “build something where we show off a bit.” He notes that, as of yet, no satellite has completed a round trip to the moon after a year in Earth’s orbit without refueling. It’s funding dependent, and more attractive opportunities may come up, “so we’ll see,” he says. And he’s not stopping there. “We are tapping into a mathematical reality,” Huffman says. “If you cut dry mass off of spacecraft, you gain exponential benefits in its performance because of the way the rocket equation works. You get exponentially penalized for extra dry mass.” By integrating Orbital Arc’s thrusters, he says, a mission could cut solar panel and power supply mass because its drive is more power efficient, cut tank mass because naphthalene, unlike xenon, doesn’t require a pressure vessel, and cut thruster mass itself. With these savings, “you go from flying one-way science missions to Mars to flying two-way human rated missions to Jupiter without refueling,” Huffman claims. So while the thruster is Orbital Arc’s first step, Huffman envisions an ultra-light spacecraft bus next—arriving long before the far-future era of the video game that inspired it.
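The “mathematical reality” Huffman refers to is the Tsiolkovsky rocket equation, sketched below in Python with illustrative numbers: for a fixed propellant load, delta-v depends on the logarithm of the wet-to-dry mass ratio, so every kilogram of dry mass removed buys disproportionately more maneuvering capability (equivalently, the propellant needed for a given delta-v grows exponentially with dry mass). The specific impulse and masses here are assumptions chosen for the example, not Orbital Arc figures.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, dry_kg: float, propellant_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    m0 = dry_kg + propellant_kg   # wet (initial) mass
    mf = dry_kg                   # dry (final) mass
    return isp_s * G0 * math.log(m0 / mf)

# Illustrative small-satellite numbers (assumptions, not Orbital Arc's):
ISP = 1_500.0        # seconds; a typical order of magnitude for electric propulsion
PROPELLANT = 5.0     # kilograms of propellant, held fixed

for dry_mass in (20.0, 15.0, 10.0):   # shrink the thruster, tank, and power system
    dv = delta_v(ISP, dry_mass, PROPELLANT)
    print(f"dry mass {dry_mass:4.1f} kg -> delta-v {dv / 1000:5.2f} km/s")
```

With the same 5 kilograms of propellant, halving the dry mass from 20 to 10 kilograms in this toy example nearly doubles the achievable delta-v, which is the leverage Huffman is describing.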
spectrum.ieee.org
November 11, 2025 at 4:10 AM
IEEE WIE Podcast Focuses on Workplace Issues for Women in Tech
For anyone working in today’s rapidly evolving science, technology, engineering, and mathematics fields, visibility, authenticity, and connection are no longer optional; they are essential. But there is a lack of resources for STEM professionals, especially women, looking to express themselves fully, build meaningful networks, and lead with confidence. To help, IEEE Women in Engineering (WIE) recently launched a podcast series in which experts from around the world inspire and inform to ignite change. The series aims to amplify the diverse experiences of women from STEM fields. Through candid conversations and expert insights, the podcast goes beyond technical talks to explore the human side of innovation, navigating burnout, balancing career ambition with well-being, and building successful, sustainable careers. The series is a volunteer- and staff-run initiative. “In the early days of planning, our vision was just a spark shared among passionate volunteers eager to shape each episode and guest experience,” says Geetika Tandon, cochair of the IEEE WIE podcast subcommittee. “Seeing our podcast grow from those first conversations into a vibrant reality has been truly rewarding. We can’t wait for it to expand further.” “I’m excited that we’ve brought the drawings on our whiteboard and day planners to life,” says Kelly Onu, who is also cochair. New episodes are released on the third Wednesday of each month. ## Navigating dual-career dynamics The podcast’s premiere episode, “Moms Who Innovate,” which debuted in May, features candid conversations with two guests who are executive coaches, authors, and TEDx speakers. Adaeze Iloeje-Udeogalanya is the founder of African Women in STEM, which provides education, mentoring, and networking opportunities. Cassie Leonard is a seasoned aerospace professional who founded ELMM Coaching. Leonard offers one-on-one advice for professionals looking to grow their careers and achieve a better work-life balance. She authored __STEM Moms: Design, Build, and Test to Create the Work-Life of Your Dreams__, a book that guides women by drawing from her experiences as a working mother. Onu, who moderated the episode, spoke with Iloeje-Udeogalanya and Leonard about the ebb and flow of being a mother while building a career. Both guests described how their background as engineers shaped the way they approach motherhood and community. They emphasized the importance of creating a support system that makes the busier times of life more manageable. Leonard said she “engineered her neighborhood” and shares the responsibilities of dropping off children at school, babysitting after school, and other day-to-day tasks. Innovation for moms isn’t only about professional success, the duo said, but also about designing the kind of community that helps them thrive. The June episode, “Global Perspectives on Women in STEM,” led by Tandon, offered practical strategies for navigating work-life-balance challenges. Together with guest Sanyogita Shamsunder, CTO of telecommunications company GeoLinks in San Francisco, Tandon explored different perspectives of women around the world. 
Rawan Alghamdi, a wireless communication researcher at the King Abdullah University of Science and Technology, in Saudi Arabia, and an IEEE graduate student member, hosted August’s episode, “PIE Framework: Presence, Image, and Exposure for Professionals in STEM.” Alghamdi spoke with Jahnavi Brenner, an executive coach and former engineer, who explained the PIE model, which challenges the long-held belief that technical skills alone are enough to advance one’s career. Brenner said professionals must strategically build an authentic personal brand to dictate how they are perceived by colleagues and how visible they are within their networks and industry. She said this is especially vital for women and underrepresented groups, who often face systemic barriers to recognition and promotion. October’s episode, “Balancing Work and Life in STEM Careers,” tackled struggles parents face raising a family while working full time. It was moderated by Abinaya Inbamani, a mentor who has contributed to the successful deployment of IoT systems used for smart health care, renewable energy, and cybersecurity. She covered the intense logistics and emotional toll of balancing a demanding career with the responsibilities of parenthood. Listeners also learned time-management strategies and boundary-setting techniques, such as reframing guilt as a reminder of care and responsibility rather than failure; accepting that it’s all right to procrastinate occasionally rather than push through unhealthy stress; and organizing the day with clear boundaries between work and home. “We don’t have to do it all,” Inbamani said. “Sometimes balance is simply choosing what matters most in that moment.” ## What’s next for the podcast Upcoming episodes will focus on being present parents, setting boundaries in high-pressure environments, and redefining success on one’s own terms, Tandon and Onu say. In the works is an episode spotlighting tech trailblazer Nimisha Morkonda Gnanasekaran, who was recognized by the IEEE Computer Society as one of its Top 30 Early Career Professionals this year. She is the director of data science and advanced analytics at Western Digital, based in San Jose, Calif. Another episode, Tandon and Onu say, will feature a conversation with Cynthia Kane, author of __The Pause Principle: How to Keep Your Cool in Tough Situations__, on navigating difficult workplace conversations without shutting down or losing one’s temper. The episode will tackle critical issues and career struggles women face, Tandon and Onu say; one study found that as many as 50 percent of women leave their STEM careers within five years. ## Global reach and impact of the podcast IEEE WIE is seeing the impact the podcast is having on listeners. Several say they tune in not just for advice but also to connect with others. Others say the podcast makes them feel they are not alone in their challenges or career aspirations. The majority of listeners are in Canada, India, Japan, Saudi Arabia, Türkiye, and the United States. Onu says she hopes the audience expands to include more countries. “I hope this podcast hops across continents, sneaks into earbuds everywhere, and becomes a trusty sidekick in women’s STEM journeys—cheering them on as they conquer equations, break barriers, and maybe even invent a robot that makes perfect coffee,” Tandon says. “As the podcast series grows, our mission is to shine a spotlight on the real-life adventures (and occasional misadventures) of women in STEM. 
We want to share late-night brainstorms, coffee-fueled breakthroughs, and the moment when someone finally figures out how to unmute themselves on virtual meeting platforms.” Through personal tales, inspiring journeys, and a parade of trailblazing leaders who have tackled obstacles, IEEE WIE is celebrating the grit, wit, and brilliance of women in STEM. Whether you’re a student just beginning your STEM journey, a mid-career professional seeking clarity, or a leader looking to give back to your profession, the podcast offers a space to learn, reflect, and rise together.
spectrum.ieee.org
November 8, 2025 at 1:49 AM
Video Friday: This Drone Drives and Flies—Seamlessly
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at _IEEE Spectrum_ robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ##### ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! > _Unlike existing hybrid designs, Duawlfin eliminates the need for additional actuators or propeller-driven ground propulsion by leveraging only its standard quadrotor motors and introducing a differential drivetrain with one-way bearings. The seamless transitions between aerial and ground modes further underscore the practicality and effectiveness of our approach for applications like urban logistics and indoor navigation._ [HiPeR Lab] I appreciate the softness of NEO’s design, but those fingers look awfully fragile. [1X] > _Imagine reaching into your backpack to find your keys. Your eyes guide your hand to the opening, but once inside, you rely almost entirely on touch to distinguish your keys from your wallet, phone, and other items. This seamless transition between sensory modalities (knowing when to rely on vision versus touch) is something humans do effortlessly but robots struggle with. The challenge isn’t just about having multiple sensors. Modern robots are equipped with cameras, tactile sensors, depth sensors, and more. The real problem is **how to integrate these different sensory streams**, especially when some sensors provide sparse but critical information at key moments. Our solution comes from rethinking how we combine modalities. Instead of forcing all sensors through a single network, we train separate expert policies for each modality and learn how to combine their action predictions at the policy level._ Multi-university collaboration presented via [GitHub] Thanks, Haonan! Happy (somewhat late) Halloween from Pollen Robotics! [Pollen Robotics] > _In collaboration with our colleagues from Iowa State and the University of Georgia, we have put our pipe-crawling worm robot to the test in the field. See it crawl through corrugated drainage pipes in a stream and through a smooth section of a subsurface drainage system._ [Paper] from [Smart Microsystems Laboratory, Michigan State University] > _Heterogeneous robot teams operating in realistic settings often must accomplish complex missions requiring collaboration and adaptation to information acquired online. Because robot teams frequently operate in unstructured environments — uncertain, open-world settings without prior maps — subtasks must be grounded in robot capabilities and the physical world. We present SPINE-HT, a framework that addresses these limitations by grounding the reasoning abilities of LLMs in the context of a heterogeneous robot team through a three-stage process. In real-world experiments with a Clearpath Jackal, a Clearpath Husky, a Boston Dynamics Spot, and a high-altitude UAV, our method achieves an 87% success rate in missions requiring reasoning about robot capabilities and refining subtasks with online feedback._ [SPINE-HT] from [GRASP Lab, University of Pennsylvania] Astribot keeping itself busy at IROS 2025. [Astribot] > _In two papers published in _Matter_ and _Advanced Science_, a team of scientists from the Physical Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, developed control strategies for influencing the motion of self-propelling oil droplets. 
These oil droplets mimic single-celled microorganisms and can autonomously solve a complex maze by following chemical gradients. However, it is very challenging to integrate external perturbations and use these droplets in robotics. To address these challenges, the team developed magnetic droplets that still possess life-like properties and can be controlled by external magnetic fields. In their work, the researchers showed that they are able to guide the droplets’ motion and use them in microrobotic applications such as cargo transportation._ [Max Planck Institute] > _Everyone has fantasized about having an embodied avatar! A full-body teleoperation and data-acquisition platform is waiting for you to try it out!_ [Unitree] It’s not a humanoid, but right now it safely does useful things and probably doesn’t cost all that much to buy or run. [Naver Labs] > _This paper presents a curriculum-based reinforcement learning framework for training precise and high-performance jumping policies for the robot “Olympus.” Separate policies are developed for vertical and horizontal jumps, leveraging a simple yet effective strategy. Experimental validation demonstrates horizontal jumps up to 1.25 m with centimeter accuracy and vertical jumps up to 1.0 m. Additionally, we show that with only minor modifications, the proposed method can be used to learn omnidirectional jumping._ [Paper] from [Autonomous Robots Lab, Norwegian University of Science and Technology] > _Heavy payloads are no problem for it: The new KR TITAN ultra moves payloads of up to 1500 kg, making the heavy lifting extreme in the KUKA portfolio._ [Kuka] Good luck getting all of the sand out of that robot. Perhaps a nice oil bath is in order? [DEEP Robotics] This CMU RI Seminar is from Yuke Zhu at the University of Texas at Austin, on “Toward Generalist Humanoid Robots: Recent Advances, Opportunities, and Challenges.” > _In an era of rapid AI progress, leveraging accelerated computing and big data has unlocked new possibilities to develop generalist AI models. As AI systems like ChatGPT showcase remarkable performance in the digital realm, we are compelled to ask: Can we achieve similar breakthroughs in the physical world — to create generalist humanoid robots capable of performing everyday tasks? In this talk, I will outline our data-centric research principles and approaches for building general-purpose robot autonomy in the open world. I will present our recent work leveraging real-world, synthetic, and web data to train foundation models for humanoid robots. Furthermore, I will discuss the opportunities and challenges of building the next generation of intelligent robots._ [Carnegie Mellon University Robotics Institute]
spectrum.ieee.org
November 8, 2025 at 1:49 AM