Technoholic.me
@technoholic.bsky.social
We bring you the 🅻🅰🆃🅴🆂🆃 🆃🅴🅲🅷 🅽🅴🆆🆂 from around the world.
ⒷⒾⓄ : http://links.page/technoholic
ⓃⒺⓌⓈⓁⒺⓉⓉⒺⓇ : http://sendfox.com/lp/1y6xxd
NLP (Natural Language Processing) is a branch of AI that enables computers to understand, interpret, and generate human language. It powers chatbots, translation, sentiment analysis, and more! #NLP #AI #LanguageTech
December 19, 2025 at 12:44 PM
The Disney-OpenAI Deal Redefines the AI Copyright War
Disney is hedging against the future. OpenAI is clearing a path for Sora. And together they’ve made a blueprint for how AI and Hollywood can move forward.
www.wired.com
December 19, 2025 at 7:10 AM
In 1844, Chess Was Already Online
On 18 November 1844, the Washington Chess Club challenged its counterparts in Baltimore to a match. Two teams were organized, and at 4 p.m. on 26 November, the first game commenced with three consulting members to a side. Washington began conventionally, pushing a pawn to the center of the board. Baltimore immediately responded by mirroring the move. But this was unlike any chess game ever played before. The Baltimoreans were still in Baltimore, the Washingtonians were still in Washington, D.C., 60 kilometers away, and they were playing by electrical telegraph.

Successive moves were transmitted over the new Baltimore–Washington telegraph line, the first in the United States, which Samuel Morse and company had inaugurated in May of that year with the message “What hath God wrought.”

[Image: Samuel F.B. Morse pushed for the first U.S. telegraph, which connected Washington, D.C., to Baltimore, Md. Credit: Mathew B. Brady/Library of Congress]

One chess game led to another, and play continued on and off for days. Records of the games are incomplete and sometimes inconsistent—181 years later, it’s unclear who exactly dreamt up chess over wire and why. But thanks in part to historical documents at the Smithsonian Institution, we know enough about the people involved and the operation of the early telegraph to have a sense of the proceedings. We know that Morse would cite chess in lobbying Congress to fund the extension of the telegraphic network to New York via Philadelphia. And we know that there was much more chess by telegraph to come.

Not simply a novelty or a one-off tech demo, telegraph chess eventually became a well-known, joked-about trend in the United States and Britain, writes historian Simone Müller-Pohl. Chess by telegraph also prefigured chess played through other means of telecommunications. There are records of recreational and serious games played over radio, on telephone lines, via satellite, and through online interfaces including forums, email, and dedicated live services.
Most recently, chess has evolved into an esport. Earlier this year, chess joined the likes of Call of Duty, Street Fighter, and Rocket League at the 2025 Esports World Cup in Riyadh, Saudi Arabia.

[Image: Last August, chess grandmaster Magnus Carlsen won the first ever Chess Esports World Cup. Credit: Esports World Cup]

The number of adults worldwide who play chess regularly is often estimated at around 600 million, and many of them use whatever means available to play games across long distances with friends, rivals, and strangers. Indeed, the 1,500-year-old game and the latest in telecommunications always seem to find each other, starting just months after the first telegraph was built in the United States, when chess went electric.

The Birth of Chess by Telegraph

The Baltimore–Washington telegraph was financed in 1843 with US $30,000 (about $1.3 million today) appropriated by Congress, with the help of Morse’s business partner, Francis O.J. Smith, who had supported the project in 1838 while still a sitting congressman from Maine. By late 1844, a bill to extend the line to New York was in front of the U.S. House of Representatives.

In at least one way, drawing the attention of legislators to the new line was relatively easy—the Washington end moved back and forth between the Capitol building and the post office, near the present-day National Portrait Gallery. If you were a lawmaker in Washington at the time, the telegraph would’ve been hard to miss. But perhaps they needed more persuading.

Orrin S. Wood, a telegraph operator, thought so. On 5 December 1844, Wood wrote a letter to his brother-in-law, engineer Ezra Cornell, who had worked on the line and would go on to cofound Western Union: “We have had considerable excitement in playing chess between this place and Baltimore for the last 2 or 3 weeks.…I am inclined to think that Congress will do something for Prof Morse as very many of them appear to be very much interested with [chess].”

A week later, Morse wrote to George M. Bibb, Secretary of the Treasury, to lobby for the funding. The telegraph could relay congressional news, presidential convention results, or the whereabouts of wanted criminals, he argued. He also played the chess card:

[Image: On 18 November 1844, the Washington Chess Club challenged Baltimore to a game of telegraph chess. One telegraph operator asked his counterpart, “Are you tired of checkers?” Credit: Smithsonian Institution Archives]

“To show the variety of the operations of the telegraph, a game of draughts [checkers], and several games of chess, have been played between the cities of Baltimore and Washington, with the same ease as if the players were seated at the same table.”

Chess had even been played on rainy nights, he noted. The telegraph’s continued operation in both inclement weather and darkness compared favorably with optical telegraphy. Such systems, popular in France, consisted of regularly spaced flag towers that relayed messages by semaphore; they were costly to build and operate and only worked in daylight and good weather.

While Morse played up people’s interest in telegraphic chess, the game itself didn’t obviously begin with promotional intent. It began with checkers. We know this because Morse’s associate Alfred Vail kept a “Journal of the Magnetic Telegraph between Washington and Baltimore” (now part of the Vail Telegraph Collection at the Smithsonian) in which he meticulously recorded messages sent over the wire for posterity.

[Image: Notes from a 26 November 1844 chess game record players’ moves, as well as other snippets of information, such as “I sent tea for Mr. Vail by 5 o’clock train.” Credit: Smithsonian Institution Archives]

On 15 November 1844, Vail in Washington instructed Henry J. Rogers in Baltimore to “get a checkerboard and let us play a game tomorrow morning.” Vail promised to send instructions by regular mail on the five o’clock train. At first confused, Rogers came up with the idea of using numbered squares to communicate locations on the board.
Later that day, Rogers announced that John Wills, a journalist with the Baltimore Patriot, would play in his place. The next morning, before the checkers game began, Vail recorded a telegraphic exchange between himself and Rogers, in which Vail suggests the game is for private enjoyment, and he would prefer that Wills—a reporter—not write about it:

Do you think the game any advantage R
What game V
Checkers R
Amusement V
Don’t you think people will make remarks R
Not if it is done by ourselves V
yes have you any objections to Wills R
none if he does not publish it V
yes R

Wills was thoroughly impressed with the technology, calling it “another wonder of the age,” according to Rogers. And so the telegraphers agreed that he could publish an account of the game, which perhaps was Vail’s hope all along. The story was still being prepared for publication on 18 November when Vail tapped, “The Washington Chess Club challenge Baltimore to a game.”

How about a Nice Game of Chess?

Vail’s 1845 book about the telegraph includes a brief report on chess. He writes that in the Washington–Baltimore match, seven games were played, totalling 686 moves “transmitted without a single mistake or interruption.” These details reappear in The Book of the First American Chess Congress, which called the Baltimore–Washington games the first telegraphic chess match.

[Image: Alfred Vail, Morse’s associate, was instrumental in organizing the first telegraph chess match and kept detailed notes on messages sent over the line. Credit: Zoom Historical/Alamy]

How did the games actually unfold? While many of the rules and conventions of chess would be familiar to modern chess players, the games were unusual in other ways. At the time, the standard way of describing chess moves was descriptive notation, says John McCrary, a former president of the United States Chess Federation who maintains a collection of some 100 chess books from before 1900.
For instance, “pawn to queen’s bishop’s four” described moving a pawn in front of the bishop on the queen’s side of the board to the fourth row from the bottom. Before the electrical telegraph, such descriptions would have been used in correspondence chess, played by mail. And The Oxford Companion to Chess (1984) describes a proposed 1823 match between France and England that intended to use semaphore telegraph, although the notation used was either never planned or has been lost to time.

But Vail and Rogers used a system that assigned a unique number to each of the 64 squares. So “pawn to queen’s bishop’s four” would have been rendered as “11 to 27.” Though the game itself can be remarkably complex, that system allowed individual moves to be communicated simply. “The exchange of information in chess is relatively low,” says David Kazdan, an engineer at Case Western Reserve University who has recently overseen a renewed collaboration between the school’s radio and chess clubs. “You don’t need much of a communication channel to play chess.”

[Image: To represent the positions on their telegraph chess board, Alfred Vail and Henry Rogers assigned a unique number to each of the 64 squares. Credit: Smithsonian Institution Archives]

Vail’s book logs the moves for two of the chess games, and both accounts include an illegal move—probably errors that were introduced later. The accounts in Vail’s telegraphic journal, on the other hand, appear accurate, and even include a real-time correction of one move.

[Image: In the first game of telegraph chess, White was defeated. Credit: Google Books]

In Vail’s journal, Washington claims the white pieces, but close examination shows that Washington either played the first move as black, or the board was mirrored left to right. At the time, the white pieces did not always take the first move. The sides also agreed to a limit of 10 minutes per move, even though time controls weren’t common in chess, and the chess clock had not yet been invented.
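The Vail–Rogers square-numbering scheme described above can be sketched in a few lines of code. The exact enumeration order isn't given in the article; the order assumed below (a1 = 1, counting across each rank, so a2 = 9) is an inference, chosen because it reproduces the reported rendering of "pawn to queen's bishop's four" (the c2 pawn advancing to c4) as "11 to 27":

```python
# Assumed enumeration for the Vail-Rogers scheme: a1 = 1, b1 = 2, ...,
# h1 = 8, a2 = 9, ..., h8 = 64. This ordering is an inference that
# matches the article's one example; the historical order may differ.

def square_to_number(square: str) -> int:
    """Convert an algebraic square like 'c2' to its telegraph number."""
    file = ord(square[0]) - ord('a')   # a=0 .. h=7
    rank = int(square[1]) - 1          # rank 1 -> 0
    return rank * 8 + file + 1

def move_to_wire(from_sq: str, to_sq: str) -> str:
    """Render a move as it might have been tapped over the wire."""
    return f"{square_to_number(from_sq)} to {square_to_number(to_sq)}"

# "Pawn to queen's bishop's four": the c2 pawn advances to c4.
print(move_to_wire("c2", "c4"))  # -> "11 to 27"
```

Two small integers per move is all the channel had to carry, which is why Kazdan's point about chess needing very little bandwidth held even for an 1844 telegraph.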
And while Vail wanted “first rate players,” McCrary calls the overall play weak, with a poor understanding of the long-term planning needed to coordinate all of the pieces. Both teams also made tactical errors. For example, in the second game, Washington overlooked that one of their pawns was overworked defending two other pieces simultaneously, watched as Baltimore captured one of the pieces, and elected not to retaliate in order to continue defending a more valuable knight. “Even with changing conventions of that time, what was there in the description was atypical,” says McCrary.

RELATED: This 1920 Chess Automaton Was Wired to Win

The teams took a break during the first game and then reconvened on 28 November. With a pawn in position to advance to the last row, where it would be “promoted”—that is, replaced by a more dangerous piece of the player’s choice—Baltimore swept in with its queen and readied checkmate in one move. Unable to salvage the game, Washington resigned. “Ha ha,” wrote Rogers. “Ha ha,” responded Vail. There is no record of overall standings, and no winner was declared between the two cities after all games had been played.

The Tech of Telegraph Chess

By today’s standards, the hardware that relayed the moves was relatively simple, mainly consisting of a battery, a switch, and a magnet. “It’s not all that different from a doorbell,” says David Hochfelder, a historian at the State University of New York at Albany who has studied the early American telegraph. Laying the line between the two cities had been difficult, with costly delays after failed attempts to bury the cable and to use cheaper noninsulated wire. Eventually, overhead insulated copper wire was strung the distance between poles.
[Image: On 24 May 1844, this telegraph register received the first message sent by telegraph: “What hath God wrought.” Credit: AP]

Years before the chess match, Morse had considered a messaging system that used only numbers, which corresponded to set words or phrases listed in a code book. But he soon realized that a practical communications service would need an alphabetic component to spell out proper names. This led to Morse’s eponymous code, which assigns a series of short and long signals to different alphanumeric characters.

By tapping on a key, telegraph operators would interrupt a battery-powered current that ran the length of the telegraph wire. At the other end, an electromagnet moved a stylus, pen, or pencil, to mark a piece of paper with the corresponding dots and dashes, which an operator would then read. (The sounder, which turned the signals into audible sounds, hadn’t yet been invented.)

Related: Morse Code’s Vanquished Competitor: The Dial Telegraph

During the chess games, the telegraph operators occasionally asked each other how many people were in the room. At times, a dozen kibitzers looked on. At others, only the rotating cast of chess players and telegraph operators was present.

Telegraph Chess Moves On

The Baltimore–Washington telegraph line was an immediate hit with a general public that embraced popular science through lectures and popular books and magazines. Scientific American was founded in 1845, for example. But people were more curious to see the telegraph at work than they were to use its services, even though the line operated free of charge for the first year. “Operators tended to show its capabilities rather than handling actual message traffic,” says Hochfelder.

The lack of activity is sometimes evident in the telegraph journal. Many of the messages are purely functional (“I am ready,” “stop 30 minutes”); simple greetings; notifications of letters sent and received; or requests for daily newspapers. The Baltimore end of the telegraph was in the Mt. Clare station of the B&O railroad, and the telegraph line ran alongside the tracks. Mail delivered by train took half a day door to door, says Hochfelder, and the telegraph offered little practical advantage.

On 5 December 1844, Rogers wrote to Vail: “I hear from several sources that we are making rather an unfavorable impression with the religious part of the community, and I am under the impression if we continue after the present party is through that we will be injured more than any benefit might or can be derived from it.” The exact nature of the religious community’s complaint with telegraph chess is unclear. Although Morse wrote to Vail on the day of the first chess game that he “was much pleased with your game of drafts,” he came to feel that chess was too frivolous for the telegraph, as noted by the chess writer Tim Harding in his Correspondence Chess in Britain and Ireland, 1824–1987 (McFarland, 2011).

Whatever the reasons, it appears that after 17 December 1844, no more chess was played on the line. And in the end, Congress didn’t fund a telegraphic connection to New York, nor did it acquire perpetual rights to the telegraph, in part because Morse’s business partner had other designs, says Hochfelder. The Baltimore–Washington line operated under the auspices of the Postal Service from 1845 to 1847, when funding ended.

[Image: When U.S. chess grandmaster Bobby Fischer was prevented from attending an international tournament in Havana, his moves were relayed via teletype. Credits: Left: Everett Collection Historical/Alamy; Right: Smith Archive/Alamy]

After that, the U.S. telegraph thrived in private ventures. Over the next few years, companies built local lines and networks to connect cities across the country. Most notably, Ezra Cornell’s Western Union completed a transcontinental telegraph in 1861, and eventually became a monopoly in the United States. Ordinary people rarely used the telegraph, says Hochfelder, but it transformed industries such as finance and journalism.
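The alphabetic code Morse arrived at is easy to emulate. A minimal sketch, using the modern International Morse alphabet (note that Morse's original "American" code differed in several letters, so this is illustrative rather than historically exact):

```python
# Minimal Morse encoder. The table is modern International Morse, used
# here for illustration; Morse's original 1840s "American" code differed.

MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def encode(message: str) -> str:
    """Encode a message: letters separated by spaces, words by ' / '."""
    words = message.upper().split()
    return ' / '.join(' '.join(MORSE[c] for c in word if c in MORSE)
                      for word in words)

print(encode("What hath God wrought"))
```

Each letter becomes a short run of dots and dashes, which is exactly what the electromagnet-driven stylus marked onto paper tape at the receiving end.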
Meanwhile, telegraph chess was taken up elsewhere. In 1845, for example, a match between London and Gosport, England, involved inventor Charles Wheatstone and chess master Howard Staunton. But it would take another few decades for telechess to become more widespread, with prominent club matches played over telegraph from the 1890s into the 1920s.

High-level chess competitions tend to be held in person, but games have been played remotely from time to time. For example, in 1965, U.S. grandmaster Bobby Fischer relayed his moves by teletype over telephone lines from New York City to Havana, after the U.S. State Department prevented his attending a tournament there. And in 1999, a couple of years after losing a rematch to the IBM supercomputer Deep Blue, world champion Garry Kasparov played a promotional game against a team representing “the world,” which consulted on moves via a Microsoft forum.

[Image: In a promotional game in 1999, Russian grandmaster Garry Kasparov played an online game against “the world.” Credit: Jeff Christensen/Getty Images]

Today, the internet has taken telecom chess to fabulous new heights, with one site alone, chess.com, often hosting up to 20 million games a day. Indeed, the growth in online play has sometimes stretched the capacities of the servers and the engineers who maintain them.

Why have technologists taken the opportunity to play chess using so many generations of telecommunications? It may simply be that chess is popular, and by its nature can actually be played with short messages and perfect information, unlike soccer or poker. But is there something more, maybe a natural affinity? “There are similarities in thinking processes [between] engineering design, and the sort of puzzle solving that a chess game involves,” says Kazdan of Case Western Reserve. The connection may be one-sided. “Many engineers like chess. I’m not sure many chess players like engineering.”
spectrum.ieee.org
December 19, 2025 at 1:20 AM
Create engaging videos faster with Mootion using AI-generated storyboards and edits from text or audio, perfect for creators, educators, and marketers to save time and boost content quality. #VideoCreation #AI
Complete Guide to Using Mootion for Engaging Multimedia Content
Unlock the potential of Mootion with this comprehensive guide, featuring tips and techniques for creating captivating multimedia content that engages audiences.
technoholic.me
December 17, 2025 at 3:35 PM
Your peace of mind is priceless. Don't settle for a toxic workplace. Surround yourself with positivity and growth!

#ToxicFree #WorkplaceWellness #PositiveVibes #Motivation
December 17, 2025 at 12:44 PM
Cursor Launches an AI Coding Tool For Designers
The 300-person startup hopes bringing designers aboard will give it an edge in an increasingly competitive AI software market.
www.wired.com
December 16, 2025 at 10:26 PM
India's GCC hiring is booming, growing 4x faster than IT services, with 160,000 jobs expected in FY25, fueled by AI, cloud, and cybersecurity.
December 16, 2025 at 2:10 PM
I can create, preview, schedule, and analyze all social media posts using Publer. 🦸‍♂️

The intuitive dashboard allows me to curate posts in detail & collaborate smoothly with the entire workspace.

Try it today for FREE!

publer.io/technoholic
December 16, 2025 at 12:40 PM
Intel Takes Major Step in Plan to Acquire Chip Startup SambaNova
The two chip companies have signed a term sheet, according to sources with direct knowledge of the agreement.
www.wired.com
December 15, 2025 at 10:05 AM
Two New AI Ethics Certifications Available from IEEE
It appears that nearly every organization is planning to use artificial intelligence to improve operations. Although autonomous intelligent systems (AIS) can offer significant benefits, they also can be used unethically. The technology can create deepfakes, realistic-looking altered images and videos that help spread misinformation and disinformation. Meanwhile, AI systems trained on biased data can perpetuate discrimination in hiring, lending, and other practices. And surveillance systems that incorporate AI can lead to misidentification.

Those issues have led to concerns about AIS trustworthiness, and it has become more crucial for AI developers and companies to ensure the systems they use and sell are ethically sound. To help them, the IEEE Standards Association (IEEE SA) launched its IEEE CertifAIEd ethics program, which offers two certifications: one for individuals and one for products.

IEEE CertifAIEd was developed by an interdisciplinary group of AI ethics experts. The program is based on IEEE’s AI ethics framework and methodology, centered around the pillars of accountability, privacy, transparency, and avoiding bias. The program incorporates criteria outlined in the AI ontological specifications released under Creative Commons licenses. IEEE is the only international organization that offers the programs, says Jon Labrador, director for conformity assessment of IEEE SA programs.

Assessment program details

The professional certification provides individuals with the skills to assess an AIS for adherence to IEEE’s methodology and ethics framework. Those with at least one year of experience in the use of AI tools or systems in their organization’s business processes or work functions are eligible to apply for the certification. You don’t have to be a developer or engineer to benefit from the training, Labrador says. Insurance underwriters, policymakers, human resources personnel, and others could benefit from it, he says.
“Professionals from just about any industry or any company that’s using an AI tool to process business transactions are eligible for this program,” he says. The training program covers how to ensure that AI systems are open and understandable; identify and mitigate biases in algorithms; and protect personal data. The curriculum includes use cases. Courses are available in virtual, in-person, or self-study formats.

Learners must take a final exam. Once they’ve successfully passed the test, they’ll receive their three-year IEEE professional certification, which is globally recognized, accepted, and respected, Labrador says. “With the certification, you’ll become a trusted source for reviewing AI tools used in your business processes, and you’ll be qualified to run an assessment,” he says. “It would be incumbent on a company to have a few IEEE CertifAIEd professionals to review its tools regularly to make sure they conform with the values identified in our program.” The self-study exam preparatory course is available to IEEE members at US $599; it costs $699 for nonmembers.

Product assessments

The product certification program assesses whether an organization’s AI tool or AIS conforms to the IEEE framework and continuously aligns with legal and regulatory principles such as the European Union AI Act. An IEEE CertifAIEd assessor evaluates the product to ensure it meets all criteria. There are more than 300 authorized assessors. Upon completion of the assessment, the company submits it to IEEE Conformity Assessment, which certifies the product and issues the certification mark. “That mark lets customers know that the company has gone through the rigors and is 100 percent in conformance with the latest IEEE AI ethics specifications,” Labrador says.
“The IEEE CertifAIEd program can also be viewed as a risk mitigation tool for companies,” he says, “reducing the risk of system or process failures with the introduction of a new AI tool or system in established business processes.” You can complete an application to begin the process of getting your product certified.
spectrum.ieee.org
December 15, 2025 at 1:20 AM
Make engaging videos faster with Mootion leveraging AI-generated storyboards and editing from text or audio. Perfect for content creators, teachers, and marketers to reduce production time and enhance content quality.
Complete Guide to Using Mootion for Engaging Multimedia Content
Unlock the potential of Mootion with this comprehensive guide, featuring tips and techniques for creating captivating multimedia content that engages audiences.
technoholic.me
December 14, 2025 at 3:47 PM
Make engaging videos more quickly with Mootion leveraging AI-generated storyboards and editing from text or audio. Perfect for content creators, teachers, and marketers to reduce production time and enhance content quality. #VideoCreation #AI
Mootion | Turn your ideas into visual stories
Mootion is an AI-native content creation platform, on a mission to unlock creativity in the digital realm for everyone, transforming professional workflows into accessible, universal processes. Mootion aims to build an AI-driven creative hub encompassing 3D, video, animation, gaming, and more, becoming a platform that inspires creativity, fosters sharing, and facilitates collaboration for all.
www.mootion.com
December 14, 2025 at 3:47 PM
This Low-Cost Stopgap Tech Can Fix the Grid
The power surging through transmission lines over the iconic stone walls of England’s northern countryside is pushing the United Kingdom’s grid to its limits. To the north, Scottish wind farms have doubled their output over the past decade. In the south, where electricity demand is heaviest, electrification and new data centers promise to draw more power, but new generation is falling short. Construction on a new 3,280-megawatt nuclear power plant west of London lags years behind schedule. The result is a lopsided flow of power that’s maxing out transmission corridors from the Highlands to London.

That grid strain won’t ease any time soon. New lines linking Scotland to southern England are at least three to four years from operation, and at risk of further delays from fierce local opposition. At the same time, U.K. Prime Minister Keir Starmer is bent on installing even more wind power and slashing fossil-fuel generation by 2030. His Labour government says low-carbon power is cheaper and more secure than natural gas, much of which comes from Norway via the world’s longest underwater gas pipeline and is vulnerable to disruption and sabotage.

[Image: The lack of transmission lines available to move power flowing south from Scottish wind farms has caused grid congestion in England. To better manage it, the U.K. has installed SmartValves at three substations in northern England—Penwortham, Harker, and Saltholme—and is constructing a fourth at South Shields. Credit: Chris Philpot]

The U.K.’s resulting grid congestion prevents transmission operators from delivering some of their cleanest, cheapest generation to all of the consumers who want it. Congestion is a perennial problem whenever power consumption is on the rise. It pushes circuits to their thermal limits and creates grid stability or security constraints.
With congestion relief needed now, the U.K.’s grid operators are getting creative, rapidly tapping new cable designs and innovations in power electronics to squeeze more power through existing transmission corridors. These grid-enhancing technologies, or GETs, present a low-cost way to bridge the gap until new lines can be built. “GETs allow us to operate the system harder before an investment arrives, and they save a s***load of money,” says Julian Leslie, chief engineer and director of strategic energy planning at the National Energy System Operator (NESO), the Warwick-based agency that directs U.K. energy markets and infrastructure.

[Image: Transmission lines running across England’s countryside are maxed out, creating bottlenecks in the grid that prevent some carbon-free power from reaching customers. Credit: Vincent Lowe/Alamy]

The U.K.’s extreme grid challenge has made it ground zero for some of the boldest GETs testing and deployment. Such innovation involves some risk, because an intervention anywhere on the U.K.’s tightly meshed power system can have system-wide impacts. (Grid operators elsewhere are choosing to start with GETs at their systems’ periphery—where there’s less impact if something goes wrong.) The question is how far—and how fast—the U.K.’s grid operators can push GETs capabilities.

The new technologies still have a limited track record, so operators are cautiously feeling their way toward heavier investment. Power system experts also have unanswered questions about these advanced grid capabilities. For example, will they create more complexity than grid operators can manage in real time? Might feedback between different devices destabilize the grid? There is no consensus yet as to how to even screen for such risks, let alone protect against them, says Robin Preece, professor in future power systems at the University of Manchester, in England. “We’re at the start of establishing that now, but we’re building at the same time.
So it’s kind of this race between the necessity to get this technology installed as quickly as possible, and our ability to fully understand what’s happening.”

How is the U.K. Managing Grid Congestion?

One of the most innovative and high-stakes tricks in the U.K.’s toolbox employs electronic power-flow controllers, devices that shift electricity from jammed circuits to those with spare capacity. These devices have been able to finesse enough additional wind power through grid bottlenecks to replace an entire gas-fired generator. Installed in northern England four years ago by Smart Wires, based in Durham, N.C., these SmartValves are expected to help even more as NESO installs more of them and masters their capabilities.

Warwick-based National Grid Electricity Transmission, the grid operator for England and Wales, is adding SmartValves and also replacing several thousand kilometers of overhead wire with advanced conductors that can carry more current. And it’s using a technique called dynamic line rating, whereby sensors and models work together to predict when weather conditions will allow lines to carry extra current.

Other kinds of GETs are also being used globally. Advanced conductors are the most widely deployed. Dynamic line rating is increasingly common in European countries, and U.S. utilities are beginning to take it seriously. Europe also leads the world in topology-optimization software, which reconfigures power routes to alleviate congestion, and advanced power-flow-control devices like SmartValves.

[Image: Engineers install dynamic line rating technology from the Boston-based company LineVision on National Grid’s transmission network. Credit: National Grid Electricity Transmission]

SmartValves’ chops stand out at the Penwortham substation in Lancashire, England, one of two National Grid sites where the device made its U.K. debut in 2021. Penwortham substation is a major transmission hub, whose spokes desperately need congestion relief.
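Dynamic line rating works because a conductor's safe current limit depends on how fast the surrounding air carries heat away. The toy steady-state heat balance below illustrates the idea only: the `ampacity` function, its coefficients, and all numeric values are invented for illustration, not taken from IEEE 738, LineVision's product, or National Grid's deployment.

```python
import math

# Toy dynamic-line-rating sketch: heat generated by current (I^2 * R)
# plus solar gain must equal convective plus radiative cooling.
# All coefficients below are illustrative assumptions.

def ampacity(ambient_c: float, wind_ms: float,
             t_max_c: float = 75.0,          # assumed conductor temp limit
             r_ohm_per_m: float = 7e-5,      # assumed AC resistance
             q_solar_w_per_m: float = 15.0   # assumed solar heat gain
             ) -> float:
    """Max current (A) at which heat generated equals heat shed."""
    dT = t_max_c - ambient_c
    # Convective cooling grows with wind speed (illustrative coefficients).
    q_conv = (2.0 + 4.0 * math.sqrt(wind_ms)) * dT   # W per metre
    q_rad = 0.05 * dT                                 # W per metre
    return math.sqrt(max(q_conv + q_rad - q_solar_w_per_m, 0.0)
                     / r_ohm_per_m)

# A still summer afternoon versus a windy winter day: the same physical
# line can safely carry far more current in the second case.
print(round(ampacity(ambient_c=30, wind_ms=0.5)))
print(round(ampacity(ambient_c=5, wind_ms=8.0)))
```

The point of the sensors-plus-models approach mentioned above is to estimate these cooling terms in real time, so operators can use the weather-dependent limit instead of a conservative static one.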
Auditory evidence of heavy power flows was clear during my visit to the substation, which buzzes loudly. The sound is due to the electromechanical stresses on the substation’s massive transformers, explains my guide, National Grid commissioned engineer Paul Lloyd. Penwortham’s transformers, circuits, and protective relays are spread over 15 hectares, sandwiched between pastureland and suburban homes near Preston, a small city north of Manchester. Power arrives from the north on two pairs of 400-kilovolt AC lines, and most of it exits southward via 400-kV and 275-kV double-circuit wires.

[Image: Transmission lines lead to the congested Penwortham substation, which has become a test-bed for GETs such as SmartValves and dynamic line rating. Credit: Peter Fairley]

What makes the substation a strategic test-bed for GETs is its position just north of the U.K. grid’s biggest bottleneck, known as Boundary B7a, which runs east to west across the island. Nine circuits traverse the B7a: the four AC lines headed south from Penwortham, four AC lines closer to Yorkshire’s North Sea coast, and a high-voltage direct-current (HVDC) link offshore. In theory, those circuits can collectively carry 13.6 gigawatts across the B7a. But NESO caps its flow at several gigawatts lower to ensure that no circuits overload if any two lines turn off.

Such limits are necessary for grid reliability, but they are leaving terawatt-hours of wind power stranded in Scotland and increasing consumers’ energy costs: an extra £196 million (US $265 million) in 2024 alone. The costs stem from NESO having to ramp up gas-fired generators to meet demand down south while simultaneously compensating wind-farm operators for curtailing their output, as required under U.K. policy.

So National Grid keeps tweaking Penwortham. In 2011 the substation got its first big GET: phase-shifting transformers (PSTs), a type of analog flow controller.
PSTs adjust power flow by creating an AC waveform whose alternating voltage leads or lags its alternating current. Each PST does this using a pair of connected transformers that selectively combine power from an AC transmission circuit’s three phases. Motors reposition electrical connections on the transformer coils to adjust flows.

Phase-shifting transformers (PSTs) were installed in 2012 at the Penwortham substation and are the analog predecessor to SmartValves. They’re powerful but also bulky and relatively inflexible. It can take 10 minutes or more for the PSTs’ motorized actuators at Penwortham to tap their full range of flow control, whereas SmartValves can shift within milliseconds. National Grid Electricity Transmission

Penwortham’s pair of 540-tonne PSTs occupy the entire south end of the substation, along with their dedicated chillers, relays, and power supplies. Delivering all that hardware required extensive road closures and floating a huge barge up the adjacent River Ribble, an event that made national news. The SmartValves at Penwortham stand in stark contrast to the PSTs’ heft, complexity, and mechanics. SmartValves are a type of static synchronous series compensator, or SSSC—a solid-state alternative to PSTs that employs power electronics to tweak power flows in milliseconds. I saw two sets of them tucked into a corner of the substation, occupying a quarter of the area of the PSTs.

The SmartValve V103 design [above] experienced some teething and reliability issues that were ironed out with the technology’s next iteration, the V104. National Grid Electricity Transmission/Smart Wires

The SmartValves are first and foremost an insurance policy to guard against a potentially crippling event: the sudden loss of one of the B7a’s 400-kV lines. If that were to happen, gigawatts of power would instantly seek another route over neighboring lines.
And if it happened on a windy day, when lots of power is streaming in from the north, the resulting surge could overload the 275-kV circuits headed from Penwortham to Liverpool. The SmartValves’ job is to save the day. They do this by adding impedance to the 275-kV lines, thus acting to divert more power to the remaining 400-kV lines. This rerouting of power prevents a blackout that could potentially cascade through the grid. The upside to that protection is that NESO can safely schedule an additional 350 MW over the B7a. The savings add up. “That’s 350 MW of wind you’re no longer curtailing from wind farms. So that’s 350 times £100 a megawatt-hour,” says Leslie, at NESO. “That’s also 350 MW of gas-fired power that you don’t need to replace the wind. So that’s 350 times £120 a megawatt-hour. The numbers get big quickly.” Mark Osborne, the National Grid lead asset life-cycle engineer managing its SmartValve projects, estimates the devices are saving U.K. customers over £100 million (US $132 million) a year. At that rate, they’ll pay for themselves “within a few years,” Osborne says. By utility standards, where investments are normally amortized over decades, that’s “almost immediately,” he adds.

How Do Grid-Enhancing Technologies Work?

The way Smart Wires’ SSSC devices adjust power flow is based on emulating impedance, which is a strange beast created by AC power. An AC flow’s changing magnetic field induces an additional voltage in the line’s conductor, which then acts as a drag on the initial field. Smart Wires’ SSSC devices alter power flow by emulating that natural process, effectively adding or subtracting impedance by adding their own voltage wave to the line. Adding a wave that leads the original voltage wave will boost flow, while adding a lagging wave will reduce flow. The SSSC’s submodules of capacitors and high-speed insulated-gate bipolar transistors operate in sequence to absorb power from a line and synthesize these impedance-altering waves.
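The effect of emulated impedance on line loading can be illustrated with the textbook flow equation for a lossless AC line, P = V²·sin(δ)/X. The sketch below uses purely illustrative parameters (not actual National Grid line data) to show how a lagging or leading injected wave, acting like added or subtracted series reactance, shifts power off or onto a circuit:

```python
import math

def line_flow_mw(v_kv: float, x_ohms: float, delta_deg: float) -> float:
    """Active power over a lossless AC line: P = V^2 * sin(delta) / X,
    assuming equal voltage magnitudes at both ends."""
    v = v_kv * 1e3
    return v * v * math.sin(math.radians(delta_deg)) / x_ohms / 1e6  # MW

X_LINE = 20.0  # hypothetical 275-kV circuit reactance, ohms
base = line_flow_mw(275, X_LINE, 10)  # roughly 657 MW with these numbers

# A lagging injected wave emulates extra inductive reactance and pushes
# power off the line; a leading wave emulates capacitive (negative)
# reactance and pulls power onto it.
pushed_off = line_flow_mw(275, X_LINE + 4.0, 10)
pulled_on = line_flow_mw(275, X_LINE - 4.0, 10)
```

A few ohms of emulated reactance either way moves well over a hundred megawatts in this toy case, which is why a compact series device can relieve a real bottleneck.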
And thanks to its digital controls and switches, the device can within milliseconds flip from maximum power push to maximum pull. You can trace the development of SSSCs to the advent of HVDC transmission in the 1950s. HVDC converters take power from an AC grid and efficiently convert it and transfer it over a DC line to another point in the same grid, or to a neighboring AC grid. In 1985, Narain Hingorani, an HVDC expert at the Palo Alto–based Electric Power Research Institute, showed that similar converters could modulate the flow of an AC line. Four years later, Westinghouse engineer Laszlo Gyugyi proposed SSSCs, which became the basis for Smart Wires’ boxes. Major power-equipment manufacturers tried to commercialize SSSCs in the early 2000s. But utilities had little need for flow control back then because they had plenty of conventional power plants that could meet local demand when transmission lines were full. The picture changed as solar and wind generation exploded and conventional plants began shutting down. In years past, grid operators addressed grid congestion by turning power plants on or off in strategic locations. But as of 2024, the U.K. had shut down all of its coal-fired power plants—save one, which now burns wood—and it has vowed to slash gas-fired generation from about a quarter of electricity supply in 2024 to at most 5 percent in 2030. The U.K.’s extreme grid challenge has made it ground zero for some of the boldest testing and deployment of GETs. To seize the emerging market opportunity presented by changing grid operations, Smart Wires had to make a crucial technology upgrade: ditching transformers. The company’s first SSSC, and those from other suppliers, relied on a transformer to absorb lightning, voltage surges, and every other grid assault that could fry their power electronics. This made them bulky and added cost.
So Smart Wires engineers set to work in 2017 to see if they could live without the transformer, says Frank Kreikebaum, Smart Wires’ interim chief of engineering. Two years later the company had assembled a transformerless electronic shield. It consisted of a suite of filters and diverters, along with a control system to activate them. Ditching the transformer produced a trim, standardized product—a modular system-in-a-box. SmartValves work at any voltage and are generally ganged together to achieve a desired level of flow control. They can be delivered fast, and they fit in the kinds of tight spaces that are common in substations. “It’s not about cost, even though we’re competitive there. It’s about ‘how quick’ and ‘can it fit,’” says Kreikebaum. And if the grid’s pinch point shifts? The devices can be quickly moved to another substation. “It’s a Lego-brick build,” says Owen Wilkes, National Grid’s director of network design. Wilkes’s team decides where to add equipment based on today’s best projections, but he appreciates the flexibility to respond to unexpected changes. National Grid’s deployments in 2021 were the highest-voltage installations of SSSCs at the time, and success there is fueling expansion. National Grid now has packs of SmartValves installed at three substations in northern England and under construction at another, with five more installations planned in that area. Smart Wires has also commissioned commercial projects at transmission substations in Australia, Brazil, Colombia, and the United States.

Dynamic Line Rating Boosts Grid Efficiency

In addition to SSSCs, National Grid has deployed lidar that senses sag on Penwortham’s 275-kV lines—an indication that they’re starting to overheat. The sensors are part of a dynamic line rating system and help grid operators maximize the amount of current that high-voltage lines can carry based on near-real-time weather conditions. (Cooler weather means more capacity.)
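The principle behind dynamic line rating is a steady-state thermal balance: the allowable current is whatever makes I²R heating equal the cooling the weather provides, so cool or windy conditions unlock headroom over a conservative static rating. The toy model below uses invented cooling coefficients purely for illustration; real ratings follow standards such as IEEE 738:

```python
import math

def ampacity_amps(t_ambient_c: float, wind_m_s: float,
                  t_conductor_max_c: float = 75.0,
                  r_ohm_per_m: float = 7e-5) -> float:
    """Toy steady-state rating: allowable current balances I^2*R heating
    against convective plus radiative cooling. The coefficients below are
    invented for illustration, not IEEE-738 values."""
    dt = t_conductor_max_c - t_ambient_c
    q_convective = (1.0 + 3.0 * math.sqrt(wind_m_s)) * dt  # W/m, toy model
    q_radiative = 0.2 * dt                                 # W/m, toy model
    return math.sqrt((q_convective + q_radiative) / r_ohm_per_m)

# A static rating assumes conservative worst-case weather; a dynamic
# rating on a cool, windy day can be substantially higher.
static_rating = ampacity_amps(t_ambient_c=40, wind_m_s=0.6)
dynamic_rating = ampacity_amps(t_ambient_c=10, wind_m_s=5.0)
```

Even with made-up coefficients, the structure shows why wind matters so much: convective cooling grows with wind speed, and the rating grows with the square root of total cooling.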
Now the same technology is being deployed across the B7a—a £1 million investment that is projected to save consumers £33 million annually, says Corin Ireland, a National Grid optimization engineer with the task of seizing GETs opportunities. A lot of old conductor wire is also being swapped out for wire that can carry more power. National Grid’s business plan calls for 2,416 kilometers of such reconductoring over the coming five years, which is about 20 percent of its system. Scotland’s transmission operators are busy with their own big swaps.

Scottish wind farms have doubled their power output over the past decade, but it often gets stranded due to grid congestion in England. Andreas Berthold/Alamy

But while National Grid and NESO are making some of the boldest deployments of GETs in the world, they’re not fully tapping the technologies’ capabilities. That’s partly due to the conservative nature of power utilities, and partly because grid operators already have plenty to keep their eyes on. It also stems from the unknowns that still surround GETs, like whether they might take the grid in unforeseen directions if allowed to respond automatically, or get stuck in a feedback loop responding to each other. Imagine SmartValve controllers at different substations fighting, with one substation jumping to remove impedance that the other just added, causing fluctuating power flows. “These technologies operate very quickly, but the computers in the control room are still very reliant on people making decisions,” says Ireland. “So there are time scales that we have to take into consideration when planning and operating the network.” This kind of conservative dispatching leaves value on the table. For example, the dynamic line rating models can spit out new line ratings every 15 minutes, but grid operators get updates only every 24 hours. Fewer updates means fewer opportunities to tap the system’s ability to boost capacity.
Similarly, for SmartValves, NESO activates installations at only one substation at a time. And control-room operators turn them on manually, even though the devices could automatically respond to faults within milliseconds.

National Grid is upgrading transmission lines dating as far back as the 1960s. This includes installing conductors that retain their strength at higher temperatures, allowing them to carry more power. National Grid Electricity Transmission

Modeling by Smart Wires and National Grid shows a significant capacity boost across Boundary B7a if Penwortham’s SmartValves were to work in tandem with another set further up the line. For example, when Penwortham is adding impedance to push megawatts off the 275-kV lines, a set closer to Scotland could simultaneously pull the power north, nudging the combined flow over to the B7a’s eastern circuits. Simulations by Andy Hiorns, a former National Grid planning director who consults for Smart Wires, suggest that this kind of cooperative action should increase the B7a circuits’ usable capacity by another 250 to 300 MW. “You double the effectiveness by using them as pairs,” he says. Operating multiple flow controllers may become necessary for unlocking the next boundary en route to London, south of the B7a, called Boundary B8. As dynamic line rating, beefier conductors, and SmartValves send more power across the B7a, lines traversing B8 are reaching their limits. Eventually, every boundary along the route will have to be upgraded. Meanwhile, back at its U.S. headquarters, Smart Wires is developing other applications for its SSSCs, such as filtering out power oscillations that can destabilize grids and reduce allowable transfers. That capability could be unlocked remotely with firmware. The company is also working on a test program that could turn on pairs of SmartValve installations during slack moments when there isn’t much going on in the control rooms.
That would give National Grid and NESO operators an opportunity to observe the impacts, and to get more comfortable with the technology. National Grid and Smart Wires are also hard at work developing industry-first optimization software for coordinating flow-control devices. “It’s possible to extend the technology from how we’re using it today,” says Ireland at National Grid. “That’s the exciting bit.” NESO’s Julian Leslie shares that excitement and says he expects SmartValves to begin working together to ease power through the grid—once the operators have the modeling right and get a little more comfortable with the technology. “It’s a great innovation that has the potential to be really transformational,” he says. “We’re just not quite there yet.”
spectrum.ieee.org
December 14, 2025 at 10:05 AM
OpenAI Hires Slack CEO as New Chief Revenue Officer
OpenAI Hires Slack CEO as New Chief Revenue Officer
A memo obtained by WIRED confirms Denise Dresser's departure from Slack. She is now headed to OpenAI.
www.wired.com
December 14, 2025 at 1:20 AM
OpenAI Staffer Quits, Alleging Company’s Economic Research Is Drifting Into AI Advocacy
OpenAI Staffer Quits, Alleging Company’s Economic Research Is Drifting Into AI Advocacy
Four sources close to the situation claim OpenAI has become hesitant to publish research on the negative impact of AI. The company says it has only expanded the economic research team’s scope.
www.wired.com
December 12, 2025 at 7:10 AM
Virtual Power Plants Are Finally Having Their Moment
Virtual Power Plants Are Finally Having Their Moment
German utility RWE implemented the first known virtual power plant (VPP) in 2008, aggregating nine small hydroelectric plants for a total capacity of 8.6 megawatts. In general, a VPP pulls together many small components—like rooftop solar, home batteries, and smart thermostats—into a single coordinated power system. The system responds to grid needs on demand, whether by making stored energy available or reducing energy consumption by smart devices during peak hours. VPPs had a moment in the mid-2010s, but market conditions and the technology weren’t quite aligned for them to take off. Electricity demand wasn’t high enough, and existing sources—coal, natural gas, nuclear, and renewables—met demand and kept prices stable. Additionally, despite the costs of hardware like solar panels and batteries falling, the software to link and manage these resources lagged behind, and there wasn’t much financial incentive for it to catch up. But times have changed, and less than a decade later, the stars are aligning in VPPs’ favor. They’re hitting a deployment inflection point, and they could play a significant role in meeting energy demand over the next 5 to 10 years in a way that’s faster, cheaper, and greener than other solutions.

U.S. Electricity Demand Is Growing

Electricity demand in the United States is expected to grow 25 percent by 2030 due to data center buildouts, electric vehicles, manufacturing, and electrification, according to estimates from technology consultant ICF International. At the same time, a host of bottlenecks are making it hard to expand the grid. There’s a backlog of at least three to five years on new gas turbines. Hundreds of gigawatts of renewables are languishing in interconnection queues, where there’s also a backlog of up to five years. On the delivery side, there’s a transformer shortage that could take up to five years to resolve, and a dearth of transmission lines.
This all adds up to a long, slow process to add generation and delivery capacity, and it’s not getting faster anytime soon. “Fueling electric vehicles, electric heat, and data centers solely from traditional approaches would increase rates that are already too high,” says Brad Heavner, the executive director of the California Solar & Storage Association. Enter the vast network of resources that are already active and grid-connected—and the perfect storm of factors that make now the time to scale them. Adel Nasiri, a professor of electrical engineering at the University of South Carolina, says variability of loads from data centers and electric vehicles has increased, as has deployment of grid-scale batteries and storage. There are more distributed energy resources available than there were before, and the last decade has seen advances in grid management using autonomous controls. At the heart of it all, though, is the technology that stores and dispatches electricity on demand: batteries.

Advances in Battery Technology

Over the last 10 years, battery prices have plummeted: the average lithium-ion battery pack price fell from US $715 per kilowatt-hour in 2014 to $115 per kWh in 2024. Their energy density has simultaneously increased thanks to a combination of materials advancements, design optimization of battery cells, and improvements in the packaging of battery systems, says Oliver Gross, a senior fellow in energy storage and electrification at automaker Stellantis. The biggest improvements have come in batteries’ cathodes and electrolytes, with nickel-based cathodes starting to be used about a decade ago. “In many ways, the cathode limits the capacity of the battery, so by unlocking higher capacity cathode materials, we have been able to take advantage of the intrinsic higher capacity of anode materials,” says Greg Less, the director of the University of Michigan’s Battery Lab.
Increasing the percentage of nickel in the cathode (relative to other metals) increases energy density because nickel can hold more lithium per gram than materials like cobalt or manganese, exchanging more electrons and participating more fully in the redox reactions that move lithium in and out of the battery. The same goes for silicon, which has become more common in anodes. However, there’s a trade-off: These materials cause more structural instability during the battery’s cycling. The anode and cathode are surrounded by a liquid electrolyte. The electrolyte has to be electrically and chemically stable when exposed to the anode and cathode in order to avoid safety hazards like thermal runaway or fires and rapid degradation. “The real revolution has been the breakthroughs in chemistry to make the electrolyte stable against more reactive cathode materials to get the energy density up,” says Gross. Chemical additives to the electrolyte—many of them based on sulfur and boron chemistry—help create stable layers between it and the anode and cathode materials. “They form these protective layers very early in the manufacturing process so that the cell stays stable throughout its life.” These advances have primarily been made on electric vehicle batteries, which differ from grid-scale batteries in that EVs are often parked or idle, while grid batteries are constantly connected and need to be ready to transfer energy. However, Gross says, “the same approaches that got our energy density higher in EVs can also be applied to optimizing grid storage. The materials might be a little different, but the methodologies are the same.” The most popular cathode material for grid storage batteries at the moment is lithium iron phosphate, or LFP. Thanks to these technical gains and dropping costs, a domino effect has been set in motion: The more batteries deployed, the cheaper they become, which fuels more deployment and creates positive feedback loops.
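The price trajectory cited above ($715/kWh in 2014 to $115/kWh in 2024) implies a steep learning curve. A quick back-of-envelope calculation of the implied average annual decline:

```python
# Implied average annual price decline for lithium-ion battery packs,
# from $715/kWh (2014) to $115/kWh (2024), per the figures in the article.
start_usd, end_usd, years = 715.0, 115.0, 10
annual_decline = 1 - (end_usd / start_usd) ** (1 / years)
# annual_decline comes out near 0.17: prices fell roughly 17 percent
# per year on average, compounding to an ~84 percent total drop.
```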
Regions that have experienced frequent blackouts—like parts of Texas, California, and Puerto Rico—are a prime market for home batteries. Texas-based Base Power, which raised $1 billion in Series C funding in October, installs batteries at customers’ homes and becomes their retail power provider, charging the batteries when excess wind or solar production makes prices cheap, and then selling that energy back to the grid when demand spikes. There is, however, still room for improvement. For wider adoption, says Nasiri, “the installed battery cost needs to get under $100 per kWh for large VPP deployments.”

Improvements in VPP Software

The software infrastructure that once limited VPPs to pilot projects has matured into a robust digital backbone, making it feasible to operate VPPs at grid scale. Advances in AI are key: Many VPPs now use machine learning algorithms to predict load flexibility, solar and battery output, customer behavior, and grid stress events. This improves the dependability of a VPP’s capacity, which was historically a major concern for grid operators.

While solar panels have advanced, VPPs have been held back by a lack of similar advancement in the needed software until recently. Sunrun

Cybersecurity and interoperability standards are still evolving. Interconnection processes and data visibility in many areas aren’t consistent, making it hard to monitor and coordinate distributed resources effectively. In short, while the technology and economics for VPPs are firmly in place, there’s work yet to be done aligning regulation, infrastructure, and market design. On top of technical and cost constraints, VPPs have long been held back by regulations that prevented them from participating in energy markets like traditional generators. SolarEdge recently announced enrollment of more than 500 megawatt-hours of residential battery storage in its VPP programs.
Tamara Sinensky, the company’s senior manager of grid services, says the biggest hurdle to achieving this milestone wasn’t technical—it was regulatory program design. California’s Demand Side Grid Support (DSGS) program, launched in mid-2022, pays homes, businesses, and VPPs to reduce electricity use or discharge energy during grid emergencies. “We’ve seen a massive increase in our VPP enrollments primarily driven by the DSGS program,” says Sinensky. Similarly, Sunrun’s Northern California VPP delivered 535 megawatts of power from home-based batteries to the grid in July, and saw a 400 percent increase in VPP participation from last year. FERC Order 2222, issued in 2020, requires regional grid operators to allow VPPs to sell power, reduce load, or provide grid services directly to wholesale market operators, and get paid the same market price as a traditional power plant for those services. However, many states and grid regions don’t yet have a process in place to comply with the FERC order. And because utilities profit from grid expansion and not VPP deployment, they’re not incentivized to integrate VPPs into their operations. Utilities “view customer batteries as competition,” says Heavner. According to Nasiri, VPPs would have a meaningful impact on the grid if they achieve a penetration of 2 percent of the market’s peak power. “Larger penetration of up to 5 percent for up to 4 hours is required to have a meaningful capacity impact for grid planning and operation,” he says. In other words, VPP operators have their work cut out for them in continuing to unlock the flexible capacity in homes, businesses, and EVs. Additional technical and policy advances could move VPPs from a niche reliability tool to a key power source and grid stabilizer for the energy tumult ahead.
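Nasiri's penetration thresholds translate directly into hardware terms. A back-of-envelope sketch, assuming a hypothetical market with an 80-GW peak (the peak figure is illustrative, not from the article):

```python
# Translating VPP penetration thresholds into capacity figures for a
# hypothetical market with an 80-GW peak load.
peak_gw = 80.0
meaningful_gw = 0.02 * peak_gw   # 2% of peak: threshold for a meaningful grid impact
capacity_gw = 0.05 * peak_gw     # 5% of peak: threshold for grid-planning impact...
capacity_gwh = capacity_gw * 4   # ...sustained for 4 hours, i.e. the storage required
```

For this hypothetical market, the 2 percent threshold works out to 1.6 GW of coordinated flexible capacity, and the 5-percent-for-4-hours threshold to 4 GW of power backed by 16 GWh of dispatchable energy.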
spectrum.ieee.org
December 12, 2025 at 1:20 AM
🚀 Top 5 Mistakes New Investors Make 🚀

1. Skipping research 📚 = big regrets 😬
2. Emotional trading 🎢 = portfolio rollercoaster 📉
3. Ignoring diversification 🌍 = risky bets 🎲
4. Chasing trends 🏃♂️ = late to the party 🎉
5. No exit strategy 🚪 = stuck in losses 💸
December 10, 2025 at 3:37 PM
Create engaging videos faster with Mootion using AI-generated storyboards and edits from text or audio, perfect for creators, educators, and marketers to save time and boost content quality. #VideoCreation #AI
Complete Guide to Using Mootion for Engaging Multimedia Content
Unlock the potential of Mootion with this comprehensive guide, featuring tips and techniques for creating captivating multimedia content that engages audiences.
technoholic.me
December 10, 2025 at 3:35 PM
Why the Most “Accurate” Glucose Monitors Are Failing Some Users
Why the Most “Accurate” Glucose Monitors Are Failing Some Users
When Dan Heller received his first batch of Dexcom’s latest continuous glucose monitors in early 2023, he decided to run a small experiment: He wore the new biosensor and the previous generation at the same time to see how they compared in measuring his glucose levels. The new, seventh-generation model (aptly called the G7) made by San Diego-based healthcare company Dexcom had just begun shipping in the United States. Dexcom claimed the G7 to be the “most accurate sensor” available to the thousands of people with Type 1 diabetes who use continuous glucose monitors to help manage their blood sugars. But Heller found that its real-world performance wasn’t up to par. In a September 2023 post on his Substack, which is dedicated to covering Type 1 diabetes research and management, he wrote about the experience and predicted an increase in adverse events with the G7, drawing on his past experience leading tech and biotech companies. In the two years since Heller’s experiment, many other users have reported issues with the device. Some complaints regard failed connection and deployment issues, which Dexcom claims to have now addressed. More concerning are reports of erratic, inaccurate readings. A public Facebook group dedicated to sharing negative experiences with the G7 has grown to thousands of users, and several class action lawsuits have been filed against the company, alleging false advertising and misleading claims about device accuracy. Yet, based on a standard metric in the industry, the G7 is one of the most accurate glucose sensors available. “Accuracy in the performance of our device is our number one priority. We understand this is a lifesaving device for people with Type 1 diabetes,” Peter Simpson, Dexcom’s senior vice president of innovation and sensor technology, told IEEE Spectrum. Simpson acknowledged some variability in individual sensors, but stood by the accuracy of the devices. So why have users faced issues? 
In part, metrics used in marketing can be misleading compared with real-world performance. Differences in study design, combined with complex biological realities, mean that the accuracy of these biosensors can’t be boiled down to one number—and users are learning this the hard way.

Dexcom’s Glucose Monitors

Continuous glucose monitors (CGMs) typically consist of a small filament inserted under the skin, a transmitter, and a receiver. The filament is coated with an enzyme that generates an electrical signal when it reacts with glucose in the fluid surrounding the body’s cells. That signal is then converted to a digital signal and processed to generate glucose readings every few minutes. Each sensor lasts a week or two before needing to be replaced. The technology has come a long way in recent years. In the 2010s, these devices required blood glucose calibrations twice a day and still weren’t reliable enough to dose insulin based on the readings. Now, some insulin pumps use the near-real-time data to automatically make adjustments. With those improvements has come greater trust in the data users receive—and higher standards. A faulty reading could result in a dangerous dose of insulin. The G7 introduced several changes to Dexcom’s earlier designs, including a much smaller footprint, and updated the algorithm used to translate sensor signals into glucose readings for better accuracy, Simpson says. “From a performance perspective, we did demonstrate in a clinical trial that the G7 is significantly more accurate than the G6,” he says. So Heller and others were surprised when the new Dexcom sensor seemed to be performing worse. For some batches of sensors, it’s possible that the issue was in part due to an unvalidated change in a component used in a resistive layer of the sensors. The new component showed worse performance, according to a warning letter issued by the U.S. Food and Drug Administration in March 2025, following an audit of two U.S. manufacturing sites.
The material has since been removed from all G7 sensors, Simpson says, and the company is continuing to work with the FDA to address concerns. (“The warning letter does not restrict Dexcom’s ability to produce, market, manufacture or distribute products, require recall of any products, nor restrict our ability to seek clearance of new products,” Dexcom added in a statement.) “There is a distribution of accuracies that have to do with people’s physiology and also the devices themselves. Even our clinical studies, we saw some that were really precise and some that had a little bit of inaccuracy to them,” says Simpson. “But in general, our sensor is very accurate.” In late November Abbott—one of Dexcom’s main competitors—recalled some of its CGMs due to inaccurate low glucose readings. The recall affects approximately 3 million sensors and was caused by an issue with one of Abbott’s production lines. The discrepancy between reported accuracy and user experience, however, goes beyond any one company’s manufacturing missteps.

Does MARD Matter?

The accuracy of CGM systems is frequently measured via “mean absolute relative difference,” or MARD, a percentage that compares the sensor readings to laboratory blood glucose measurements. The lower the MARD, the more accurate the sensor. This number is often used in advertising and marketing, and it has a historical relevance, says Manuel Eichenlaub, a biomedical engineer at the Institute for Diabetes Technology Ulm in Germany, where he and his colleagues conduct independent CGM performance studies. For years, there was a general belief that a MARD under 10 percent meant a system would be accurate enough to be used for insulin dosing. In 2018, the FDA established a specific set of accuracy requirements beyond MARD for insulin-guiding glucose monitors, including Dexcom’s. But manufacturers design the clinical trials that determine accuracy metrics, and the way studies are designed can make a big difference.
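MARD itself is straightforward to compute. A minimal sketch with hypothetical paired readings (the sample values are invented for illustration):

```python
def mard_percent(cgm_readings, reference_readings):
    """Mean absolute relative difference between paired CGM and laboratory
    blood-glucose values, expressed as a percentage. Lower means more accurate."""
    pairs = list(zip(cgm_readings, reference_readings))
    return 100 * sum(abs(c - r) / r for c, r in pairs) / len(pairs)

# Hypothetical paired readings, in mg/dL:
cgm = [102, 145, 88, 180, 124]
lab = [110, 150, 95, 170, 120]
m = mard_percent(cgm, lab)  # about 5.4 percent for this toy sample
```

The simplicity of the formula is part of the problem: a single averaged percentage says nothing about when or where the paired samples were taken, which is exactly where study design comes in.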
For instance, blood glucose levels serve as the “ground truth to compare the CGM values against,” says Eichenlaub. But glucose levels vary across blood compartments in the body; blood collected from capillaries with a finger prick fluctuates more and can have glucose levels around 5 to 10 percent higher than venous blood. (Dexcom tests against a gold-standard venous blood analyzer. When users see inaccuracies against home meters that use capillary blood, it could in part be a reflection of the meter’s own inaccuracy, Simpson says, though he acknowledges real inaccuracies in CGMs as well.) Additionally, the distribution of sampling isn’t standardized. CGMs are known to be less accurate at the beginning and end of use, or when glucose levels are out of range or changing quickly. That means measured accuracy could be skewed by taking fewer samples right after a meal or late in the CGM’s lifetime. According to Simpson, Dexcom’s trial protocol meets the FDA’s expectation and tests the devices in different blood sugar ranges across the life of the sensor. “Within these clinical trials, we do stress the sensors to try and simulate those real world conditions,” he says. Dexcom and other companies advertise a MARD around 8 percent. But some independent studies are more demanding and find higher numbers; a head-to-head study of three popular CGMs that Eichenlaub led found MARD values closer to 10 percent or higher. Eichenlaub and other CGM experts believe that more standardization of testing and an extension of the FDA requirements are necessary, so they recently proposed comprehensive guidelines on CGM performance testing. In the United States and Europe, a few manufacturers currently dominate the market. But newer players are entering the growing market and, especially in Europe, may not meet the same standards as legacy manufacturers, he says. 
“Having a standardized way of evaluating the performance of those systems is very important.” For users like Heller, though, better accuracy only matters if it yields better diabetes management. “I don’t care about MARD. I want data that is reliably actionable,” Heller says. He encourages engineers working on these devices to think like the patient. “At some point, there’s quantitative data, but you need qualitative data.”
spectrum.ieee.org
December 10, 2025 at 7:10 AM
Unlocking the future, one algorithm at a time!
#MachineLearning #AI #DataScience #TechInnovation #FutureTech #LearnAndGrow
December 10, 2025 at 2:01 AM
Echo Show's new Shopping Essentials makes buying products and tracking orders easier in one seamless experience. Stay organized and save time with this smarter shopping update. #SmartHome #ShoppingTech
Alexa Plus can automatically buy stuff when the price drops
Set it and (hopefully don’t) forget it.
www.theverge.com
December 10, 2025 at 1:20 AM
OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice
OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice
American AI giants are backing a new effort to establish open standards for building agentic software and tools.
www.wired.com
December 9, 2025 at 10:26 PM
India's AI journey accelerates with a $17.5B investment, focusing on scale, skills, and sovereignty to drive innovation and inclusive growth. #AIIndia #TechTransformation

news.microsoft.com/source/asia/...
December 9, 2025 at 4:10 PM
Is this real? #softwareengineering
December 9, 2025 at 12:35 PM