Scott B
scottbartlett.bsky.social
31 followers 42 following 320 posts
#AI Why IT leaders should pay attention to Canva’s ‘imagination era’ strategy
The rise of AI marks a critical shift away from decades defined by information-chasing and a push for more and more compute power. Canva co-founder and CPO Cameron Adams refers to this dawning time as the “imagination era.” Meaning: Individuals and enterprises must be able to turn creativity into action with AI.

Canva hopes to position itself at the center of this shift with a sweeping new suite of tools. The company’s new Creative Operating System (COS) integrates AI across every layer of content creation, creating a single, comprehensive creativity platform rather than a simple, template-based design tool.

“We’re entering a new era where we need to rethink how we achieve our goals,” said Adams. “We’re enabling people’s imagination and giving them the tools they need to take action.”

An 'engine' for creativity

Adams describes Canva’s platform as a three-layer stack: the top Visual Suite layer containing designs, images and other content; a collaborative Canva AI plane at the center; and a foundational proprietary model holding it all up.

At the heart of Canva’s strategy is the underlying COS. This “engine,” as Adams describes it, integrates documents, websites, presentations, sheets, whiteboards, videos, social content, hundreds of millions of photos, illustrations, a rich sound library, and numerous templates, charts, and branded elements.

The COS is getting a 2.0 upgrade, but the crucial advance is the “middle, crucial layer” that fully integrates AI and makes it accessible throughout various workflows, Adams explained. This gives creative and technical teams a single dashboard for generating, editing and launching all types of content.

The underlying model is trained to understand the “complexity of design” so the platform can build out various elements — such as photos, videos, textures, or 3D graphics — in real time, matching branding style without the need for manual adjustments. It also supports live collaboration, meaning teams across departments can co-create.

With a unified dashboard, a user working on a specific design, for instance, can create a new piece of content (say, a presentation) within the same workflow, without having to switch to another window or platform. Also, if they generate an image and aren’t pleased with it, they don’t have to go back and create from scratch; they can immediately begin editing, changing colors or tone.

Another new capability in COS, “Ask Canva,” provides direct design advice. Users can tag @Canva to get copy suggestions and smart edits; or, they can highlight an image and direct the AI assistant to modify it or generate variants.

“It’s a really unique interaction,” said Adams, noting that this AI design partner is always present. “It’s a real collaboration between people and AI, and we think it’s a revolutionary change.”

Other new features include a 2.0 video editor and interactive form and email design with drag-and-drop tools. Further, Canva now incorporates Affinity, its unified app for pro designers spanning vector, pixel and layer workflows, and Affinity is “free forever.”

Automating intelligence, supporting marketing

Branding is critical for enterprises; Canva has introduced new tools to help organizations consistently showcase theirs across platforms. The new Canva Grow engine integrates business objectives into the creative process so teams can workshop, create, distribute and refine ads and other materials.
As Adams explained: “It automatically scans your website, figures out who your audience is, what assets you use to promote your products, the message it needs to send out, the formats you want to send it out in, makes a creative for you, and you can deploy it directly to the platform without having to leave Canva.”

Marketing teams can now design and launch ads across platforms like Meta, track insights as they happen and refine future content based on performance metrics. “Your brand system is now available inside the AI you’re working with,” Adams noted.

Success metrics and enterprise adoption

The impact of Canva’s COS is reflected in notable user metrics: More than 250 million people use Canva every month, just over 29 million of whom are paid subscribers. Adams reports that 41 billion designs have been created on Canva since launch, which now equates to roughly 1 billion each month.

“If you break that down, it turns into the crazy number of 386 designs being created every single second,” said Adams. In the early days, by contrast, it took roughly an hour for users to create a single design.

Canva customers include Walmart, Disney, Virgin Voyages, Pinterest, FedEx, Expedia and eXp Realty. DocuSign, for one, reported that it unlocked more than 500 hours of team capacity and saved $300,000-plus in design hours by fully integrating Canva into its content creation. Disney, meanwhile, uses Canva’s translation capabilities for its internationalization work, Adams said.

Competitors in the design space

Canva plays in an evolving landscape of professional design tools including Adobe Express and Figma; AI-powered challengers led by Microsoft Designer; and direct consumer alternatives like Visme and Piktochart.

Adobe Express (starting at $9.99 a month for premium features) is known for its ease of use and integration with the broader Adobe Creative Cloud ecosystem. It features professional-grade templates and access to Adobe’s extensive stock library, and has incorporated Google's Gemini 2.5 Flash image model and other gen AI features so that designers can create graphics via natural language prompts. Users with some design experience say they prefer its interface, controls and technical advantages over Canva (such as the ability to import high-fidelity PDFs).

Figma (starting at $3 a month for professional plans) is touted for its real-time collaboration, advanced prototyping capabilities and deep integration with dev workflows. It has a steeper learning curve but higher-precision design tools, making it preferable for professional designers, developers and product teams working on more complex projects.

Microsoft Designer (free version available, although a Microsoft 365 subscription starting at $9.99 a month unlocks additional features) benefits from its integration with Microsoft’s AI capabilities, including Copilot-driven layout and text generation and DALL-E-powered image generation. The platform’s “Inspire Me” and “New Ideas” buttons provide design variations, and users can also import data from Excel, add 3D models from PowerPoint and access images from OneDrive. However, users report that its stock photo, template and image libraries are limited compared to Canva's extensive collection, and its visuals can come across as outdated.
Canva’s advantage seems to be in its extensive template library (more than 600,000 ready-to-use templates) and asset library (141 million-plus stock photos, videos, graphics and audio elements). Its platform is also praised for its ease of use and an interface friendly to non-designers, allowing them to get started quickly without training. Canva has also expanded into a variety of content types — documents, websites, presentations, whiteboards, videos and more — making its platform more of a comprehensive visual suite than just a graphics tool.

Canva has four pricing tiers: Canva Free for one user; Canva Pro at $120 a year for one person; Canva Teams at $100 a year per team member; and the custom-priced Canva Enterprise.

Key takeaways: Be open, embrace human-AI collaboration

Canva’s COS is underpinned by the company’s frontier model, an in-house, proprietary engine built on years of R&D and research partnerships, including the acquisition of visual AI company Leonardo. Adams notes that Canva also works with top AI providers including OpenAI, Anthropic and Google.

For technology teams, Canva’s approach offers important lessons, including a commitment to openness. “There are so many models floating around,” Adams noted; it’s important for enterprises to recognize when they should work with top models and when they should develop their own proprietary ones, he advised.

For instance, OpenAI and Anthropic recently announced integrations with Canva as a visual layer because, as Adams explained, they realized they didn’t have the capability to create the same kinds of editable designs that Canva can. This creates a mutually beneficial ecosystem.

Ultimately, Adams noted: “We have this underlying philosophy that the future is people and technology working together. It's not an either or. We want people to be at the center, to be the ones with the creative spark, and to use AI as a collaborator.”
sdbart.co
#AI Vibe coding platform Cursor releases first in-house LLM, Composer, promising 4X speed boost
The vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM), as part of its Cursor 2.0 platform update.

Composer is designed to execute coding tasks quickly and accurately in production-scale environments, representing a new step in AI-assisted programming. It's already being used by Cursor’s own engineering staff in day-to-day development — indicating maturity and stability.

According to Cursor, Composer completes most interactions in less than 30 seconds while maintaining a high level of reasoning ability across large and complex codebases. The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows — where autonomous coding agents plan, write, test, and review code collaboratively.

Previously, Cursor supported "vibe coding" — using AI to write or complete code based on natural language instructions from a user, even someone untrained in development — atop other leading proprietary LLMs from the likes of OpenAI, Anthropic, Google, and xAI. These options are still available to users.

Benchmark Results

Composer’s capabilities are benchmarked using "Cursor Bench," an internal evaluation suite derived from real developer agent requests. The benchmark measures not just correctness, but also the model’s adherence to existing abstractions, style conventions, and engineering practices.

On this benchmark, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second — about twice as fast as leading fast-inference models and four times faster than comparable frontier systems.

Cursor’s published comparison groups models into several categories: “Best Open” (e.g., Qwen Coder, GLM 4.6), “Fast Frontier” (Haiku 4.5, Gemini Flash 2.5), “Frontier 7/2025” (the strongest model available midyear), and “Best Frontier” (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes.

A Model Built with Reinforcement Learning and Mixture-of-Experts Architecture

Research scientist Sasha Rush of Cursor provided insight into the model’s development in posts on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model: “We used RL to train a big MoE model to be really good at real-world coding, and also very fast.”

Rush explained that the team co-designed both Composer and the Cursor environment to allow the model to operate efficiently at production scale: “Unlike other ML systems, you can’t abstract much from the full-scale system. We co-designed this project and Cursor together in order to allow running the agent at the necessary scale.”

Composer was trained on real software engineering tasks rather than static datasets. During training, the model operated inside full codebases using a suite of production tools — including file editing, semantic search, and terminal commands — to solve complex engineering problems. Each training iteration involved solving a concrete challenge, such as producing a code edit, drafting a plan, or generating a targeted explanation.

The reinforcement loop optimized both correctness and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously.
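Cursor has not published its training code, but the loop described above — an agent picking among tools such as file edits, searches and test runs, then being rewarded for correct, efficient solutions — can be pictured with a small toy sketch. Everything below (the function names, the simulated tools, the reward shaping) is hypothetical and only illustrates the general agentic-RL pattern, not Composer's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    tool: str        # "search", "edit", or "test" in this toy environment
    argument: str

def propose_action(task: str, history: list) -> Action:
    # Stand-in for the model's policy: a real agent would choose a tool based
    # on the task, the repository state, and earlier observations.
    if not history:
        return Action("search", task)
    if history[-1].tool == "search":
        return Action("edit", "apply patch to date_parser.py")
    return Action("test", "run unit tests")

def execute(action: Action, codebase: dict) -> tuple[str, bool]:
    """Simulate one tool call against a toy 'repository'."""
    if action.tool == "edit":
        codebase["patched"] = True
        return "patch applied", True
    if action.tool == "test":
        passed = codebase.get("patched", False)
        return "tests passed" if passed else "tests failed", passed
    return f"search results for: {action.argument}", True

def episode(task: str, max_steps: int = 5) -> float:
    """Reward correctness (tests pass) and efficiency (fewer steps)."""
    codebase, history = {}, []
    for step in range(max_steps):
        action = propose_action(task, history)
        observation, ok = execute(action, codebase)
        history.append(action)
        if action.tool == "test" and ok:
            return 1.0 - 0.1 * step   # passing sooner earns more reward
    return 0.0

print("episode reward:", episode("fix the failing date parser"))
```

In a real RL setup, many such episodes would run in parallel sandboxes and the rewards would update the policy; the sketch only shows the shape of a single rollout.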
This training design enables Composer to work within the same runtime context as the end user, making it more aligned with real-world coding conditions — handling version control, dependency management, and iterative testing.

From Prototype to Production

Composer’s development followed an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks. “Cheetah was the v0 of this model primarily to test speed,” Rush said on X. “Our metrics say it [Composer] is the same speed, but much, much smarter.”

Cheetah’s success at reducing latency helped Cursor identify speed as a key factor in developer trust and usability. Composer maintains that responsiveness while significantly improving reasoning and task generalization.

Developers who used Cheetah during early testing noted that its speed changed how they worked. One user commented that it was “so fast that I can stay in the loop when working with it.” Composer retains that speed but extends capability to multi-step coding, refactoring, and testing tasks.

Integration with Cursor 2.0

Composer is fully integrated into Cursor 2.0, a major update to the company’s agentic development environment. The platform introduces a multi-agent interface, allowing up to eight agents to run in parallel, each in an isolated workspace using git worktrees or remote machines.

Within this system, Composer can serve as one or more of those agents, performing tasks independently or collaboratively. Developers can compare multiple results from concurrent agent runs and select the best output.

Cursor 2.0 also includes supporting features that enhance Composer’s effectiveness:

* In-Editor Browser (GA) – enables agents to run and test their code directly inside the IDE, forwarding DOM information to the model.
* Improved Code Review – aggregates diffs across multiple files for faster inspection of model-generated changes.
* Sandboxed Terminals (GA) – isolate agent-run shell commands for secure local execution.
* Voice Mode – adds speech-to-text controls for initiating or managing agent sessions.

While these platform updates expand the overall Cursor experience, Composer is positioned as the technical core enabling fast, reliable agentic coding.

Infrastructure and Training Systems

To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs. The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead. This configuration allows Cursor to train models natively at low precision without requiring post-training quantization, improving both inference speed and efficiency.

Composer’s training relied on hundreds of thousands of concurrent sandboxed environments — each a self-contained coding workspace — running in the cloud. The company adapted its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.

Enterprise Use

Composer’s performance improvements are supported by infrastructure-level changes across Cursor’s code intelligence stack. The company has optimized its Language Server Protocols (LSPs) for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across organizations.

Pricing for individual users ranges from Free (Hobby) to Ultra ($200/month) tiers, with expanded usage limits for Pro+ and Ultra subscribers. Business pricing starts at $40 per user per month for Teams, with enterprise contracts offering custom usage and compliance options.

Composer’s Role in the Evolving AI Coding Landscape

Composer’s focus on speed, reinforcement learning, and integration with live coding workflows differentiates it from other AI development assistants such as GitHub Copilot or Replit’s Agent. Rather than serving as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project’s codebase.

This model-level specialization — training AI to function within the real environment it will operate in — represents a significant step toward practical, autonomous software development. Composer is not trained only on text data or static code, but within a dynamic IDE that mirrors production conditions. Rush described this approach as essential to achieving real-world reliability: the model learns not just how to generate code, but how to integrate, test, and improve it in context.

What It Means for Enterprise Devs and Vibe Coding

With Composer, Cursor is introducing more than a fast model — it’s deploying an AI system optimized for real-world use, built to operate inside the same tools developers already rely on. The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.

While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes those workflows viable. It’s the first coding model built specifically for agentic, production-level coding — and an early glimpse of what everyday programming could look like when human developers and autonomous models share the same workspace.
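A side note on the isolated-workspace mechanism mentioned above: git worktrees let several agents (or humans) work on the same repository in parallel without stepping on each other. The sketch below shows the general technique using plain git commands driven from Python; it illustrates the approach rather than Cursor's internal implementation, and the paths, branch names and function names are made up.

```python
import subprocess
from pathlib import Path

def create_agent_workspaces(repo: str, n_agents: int = 3) -> list[Path]:
    """Give each agent its own isolated checkout of the same repository
    using `git worktree`, so parallel edits never collide."""
    workspaces = []
    for i in range(n_agents):
        path = Path(repo).resolve().parent / f"agent-{i}"
        branch = f"agent/{i}"
        subprocess.run(
            ["git", "-C", repo, "worktree", "add", "-b", branch, str(path)],
            check=True,
        )
        workspaces.append(path)
    return workspaces

def remove_agent_workspaces(repo: str, workspaces: list[Path]) -> None:
    """Clean up the worktrees once the best result has been merged."""
    for path in workspaces:
        subprocess.run(
            ["git", "-C", repo, "worktree", "remove", "--force", str(path)],
            check=True,
        )

if __name__ == "__main__":
    # Point this at any local git repository to try it out.
    dirs = create_agent_workspaces("./my-repo", n_agents=3)
    print("isolated workspaces:", dirs)
    remove_agent_workspaces("./my-repo", dirs)
```

Each worktree shares the repository's object store but has its own working directory and branch, which is what makes comparing several concurrent agent runs cheap.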
sdbart.co
#AI IBM's open source Granite 4.0 Nano AI models are small enough to run locally directly in your browser
In an industry where model size is often seen as a proxy for intelligence, IBM is charting a different course — one that values efficiency over enormity, and accessibility over abstraction.

The 114-year-old tech giant's four new Granite 4.0 Nano models, released today, range from just 350 million to 1.5 billion parameters, a fraction of the size of their server-bound cousins from the likes of OpenAI, Anthropic, and Google.

These models are designed to be highly accessible: the 350M variants can run comfortably on a modern laptop CPU with 8–16GB of RAM, while the 1.5B models typically require a GPU with at least 6–8GB of VRAM for smooth performance — or sufficient system RAM and swap for CPU-only inference. This makes them well-suited for developers building applications on consumer hardware or at the edge, without relying on cloud compute. In fact, the smallest ones can even run locally in your own web browser, as Joshua Lochner aka Xenova, creator of Transformers.js and a machine learning engineer at Hugging Face, wrote on the social network X.

All the Granite 4.0 Nano models are released under the Apache 2.0 license — perfect for use by researchers and enterprise or indie developers, even for commercial usage. They are natively compatible with llama.cpp, vLLM, and MLX and are certified under ISO 42001 for responsible AI development — a standard IBM helped pioneer.

But in this case, small doesn't mean less capable — it might just mean smarter design. These compact models are built not for data centers, but for edge devices, laptops, and local inference, where compute is scarce and latency matters. And despite their small size, the Nano models are showing benchmark results that rival or even exceed the performance of larger models in the same category. The release is a signal that a new AI frontier is rapidly forming — one not dominated by sheer scale, but by strategic scaling.

What Exactly Did IBM Release?

The Granite 4.0 Nano family includes four open-source models now available on Hugging Face:

* Granite-4.0-H-1B (~1.5B parameters) – hybrid-SSM architecture
* Granite-4.0-H-350M (~350M parameters) – hybrid-SSM architecture
* Granite-4.0-1B – transformer-based variant, parameter count closer to 2B
* Granite-4.0-350M – transformer-based variant

The H-series models — Granite-4.0-H-1B and H-350M — use a hybrid state-space model (SSM) architecture that combines efficiency with strong performance, ideal for low-latency edge environments. Meanwhile, the standard transformer variants — Granite-4.0-1B and 350M — offer broader compatibility with tools like llama.cpp, designed for use cases where the hybrid architecture isn’t yet supported.

In practice, the transformer 1B model is closer to 2B parameters, but it aligns performance-wise with its hybrid sibling, offering developers flexibility based on their runtime constraints. “The hybrid variant is a true 1B model. However, the non-hybrid variant is closer to 2B, but we opted to keep the naming aligned to the hybrid variant to make the connection easily visible,” explained Emma, Product Marketing lead for Granite, during a Reddit "Ask Me Anything" (AMA) session on r/LocalLLaMA.

A Competitive Class of Small Models

IBM is entering a crowded and rapidly evolving market of small language models (SLMs), competing with offerings like Qwen3, Google's Gemma, LiquidAI’s LFM2, and even Mistral’s dense models in the sub-2B parameter space.
While OpenAI and Anthropic focus on models that require clusters of GPUs and sophisticated inference optimization, IBM’s Nano family is aimed squarely at developers who want to run performant LLMs on local or constrained hardware.

In benchmark testing, IBM’s new models consistently top the charts in their class. According to data shared on X by David Cox, VP of AI Models at IBM Research:

* On IFEval (instruction following), Granite-4.0-H-1B scored 78.5, outperforming Qwen3-1.7B (73.1) and other 1–2B models.
* On BFCLv3 (function/tool calling), Granite-4.0-1B led with a score of 54.8, the highest in its size class.
* On safety benchmarks (SALAD and AttaQ), the Granite models scored over 90%, surpassing similarly sized competitors.

Overall, the Granite-4.0-1B achieved a leading average benchmark score of 68.3% across general knowledge, math, code, and safety domains.

This performance is especially significant given the hardware constraints these models are designed for. They require less memory, run faster on CPUs or mobile devices, and don’t need cloud infrastructure or GPU acceleration to deliver usable results.

Why Model Size Still Matters — But Not Like It Used To

In the early wave of LLMs, bigger meant better — more parameters translated to better generalization, deeper reasoning, and richer output. But as transformer research matured, it became clear that architecture, training quality, and task-specific tuning could allow smaller models to punch well above their weight class.

IBM is banking on this evolution. By releasing open, small models that are competitive in real-world tasks, the company is offering an alternative to the monolithic AI APIs that dominate today’s application stack. In fact, the Nano models address three increasingly important needs:

* Deployment flexibility — they run anywhere, from mobile to microservers.
* Inference privacy — users can keep data local with no need to call out to cloud APIs.
* Openness and auditability — source code and model weights are publicly available under an open license.

Community Response and Roadmap Signals

IBM’s Granite team didn’t just launch the models and walk away — they took to Reddit’s open source community r/LocalLLaMA to engage directly with developers. In an AMA-style thread, Emma (Product Marketing, Granite) answered technical questions, addressed concerns about naming conventions, and dropped hints about what’s next.

Notable confirmations from the thread:

* A larger Granite 4.0 model is currently in training
* Reasoning-focused models ("thinking counterparts") are in the pipeline
* IBM will release fine-tuning recipes and a full training paper soon
* More tooling and platform compatibility is on the roadmap

Users responded enthusiastically to the models’ capabilities, especially in instruction-following and structured response tasks. One commenter summed it up: “This is big if true for a 1B model — if quality is nice and it gives consistent outputs. Function-calling tasks, multilingual dialog, FIM completions… this could be a real workhorse.”

Another user remarked: “The Granite Tiny is already my go-to for web search in LM Studio — better than some Qwen models. Tempted to give Nano a shot.”

Background: IBM Granite and the Enterprise AI Race

IBM’s push into large language models began in earnest in late 2023 with the debut of the Granite foundation model family, starting with models like Granite.13b.instruct and Granite.13b.chat.
Released for use within its Watsonx platform, these initial decoder-only models signaled IBM’s ambition to build enterprise-grade AI systems that prioritize transparency, efficiency, and performance. The company open-sourced select Granite code models under the Apache 2.0 license in mid-2024, laying the groundwork for broader adoption and developer experimentation.

The real inflection point came with Granite 3.0 in October 2024 — a fully open-source suite of general-purpose and domain-specialized models ranging from 1B to 8B parameters. These models emphasized efficiency over brute scale, offering capabilities like longer context windows, instruction tuning, and integrated guardrails. IBM positioned Granite 3.0 as a direct competitor to Meta’s Llama, Alibaba’s Qwen, and Google's Gemma — but with a uniquely enterprise-first lens.

Later versions, including Granite 3.1 and Granite 3.2, introduced even more enterprise-friendly innovations: embedded hallucination detection, time-series forecasting, document vision models, and conditional reasoning toggles.

The Granite 4.0 family, launched in October 2025, represents IBM’s most technically ambitious release yet. It introduces a hybrid architecture that blends transformer and Mamba-2 layers — aiming to combine the contextual precision of attention mechanisms with the memory efficiency of state-space models. This design allows IBM to significantly reduce memory and latency costs for inference, making Granite models viable on smaller hardware while still outperforming peers in instruction-following and function-calling tasks. The launch also includes ISO 42001 certification, cryptographic model signing, and distribution across platforms like Hugging Face, Docker, LM Studio, Ollama, and watsonx.ai.

Across all iterations, IBM’s focus has been clear: build trustworthy, efficient, and legally unambiguous AI models for enterprise use cases. With a permissive Apache 2.0 license, public benchmarks, and an emphasis on governance, the Granite initiative not only responds to rising concerns over proprietary black-box models but also offers a Western-aligned open alternative to the rapid progress from teams like Alibaba’s Qwen. In doing so, Granite positions IBM as a leading voice in what may be the next phase of open-weight, production-ready AI.

A Shift Toward Scalable Efficiency

In the end, IBM’s release of the Granite 4.0 Nano models reflects a strategic shift in LLM development: from chasing parameter-count records to optimizing usability, openness, and deployment reach. By combining competitive performance, responsible development practices, and deep engagement with the open-source community, IBM is positioning Granite as not just a family of models, but a platform for building the next generation of lightweight, trustworthy AI systems.

For developers and researchers looking for performance without overhead, the Nano release offers a compelling signal: you don’t need 70 billion parameters to build something powerful — just the right ones.
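For developers who want to try one of the Nano models locally, a minimal sketch with Hugging Face Transformers is below. The repository name is inferred from the model naming in this article, so verify the exact ID under the ibm-granite organization on Hugging Face before running; the standard transformer variant is used here because, as noted above, it has the broadest tooling compatibility.

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model ID inferred from the naming in the article; check the ibm-granite
# organization on Hugging Face for the exact repository name.
model_id = "ibm-granite/granite-4.0-350m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for laptop RAM

messages = [{"role": "user", "content": "Summarize: IBM released Granite 4.0 Nano models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can also be served through llama.cpp, vLLM, or MLX, per the compatibility notes above; only the loading step differs.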
sdbart.co
#AI PayPal’s Agentic Commerce Play Shows Why Flexibility, Not Standards, Will Define the Next E-Commerce Wave
While enterprises looking to sell goods and services online wait for the backbone of agentic commerce to be hashed out, PayPal is hoping its new features will bridge the gap. The payments company is launching a discoverability solution that allows enterprises to make their products available on any chat platform, regardless of the model or agent payment protocol.

PayPal, one of the participants in Google’s Agent Payments Protocol (AP2), found that it can leverage its relationships with merchants and enterprises to help pave the way for an easier transition into agentic commerce and to offer the kind of flexibility it has learned benefits the ecosystem.

Michelle Gill, PayPal general manager for small business and financial services, told VentureBeat that AI-powered shopping will continue to grow, so enterprises and brands need to start laying the groundwork early.

“We think that merchants who've historically sold through web stores, particularly in the e-commerce space, are really going to need a way to get active on all of these large language models,” Gill said. “The challenge is that no one really knows how fast all of this is going to move. The issue that we’re trying to help merchants think through is how to do all of this as low-touch as possible while using the infrastructure you already have without doing a bazillion integrations.”

She added that AI shopping would also bring about “a resurgence from consumers trying to ensure their investment is protected.”

PayPal partnered with website builder Wix, Cymbio, Commerce and Shopware to bring products to chat platforms like Perplexity.

Agent-powered shopping

PayPal’s Agentic Commerce Services include two features. The first is Agent Ready, which would allow existing PayPal merchants to accept payments on AI platforms. The second is called Shop Sync, which enables companies’ product data to be discoverable through different AI chat interfaces. It takes a company’s catalog information and plugs its inventory and fulfillment data into chat platforms.

Gill said the data goes into a central repository where AI models can ingest the information. Right now, companies can access Shop Sync, with Agent Ready coming in 2026.

Gill said Agentic Commerce Services is a one-to-many solution, which is helpful right now, as different LLMs scrape different data sources to surface information. Other benefits include:

* Fast integration with current and future partners
* More product discovery beyond the traditional search, browse and cart experiences
* Preserved customer insights and relationships, where the brand continues to have control over its records and communications with customers

Right now, the service is only available through Perplexity, but Gill said more platforms will be added soon.

Fragmented AI platforms

Agentic commerce is still very much in the early stages. AI agents are just beginning to get better at navigating a browser. While platforms like ChatGPT, Gemini and Perplexity can now surface products and services based on user queries, people cannot yet technically buy things from chat.

There’s a race right now to create a standard that lets agents transact on behalf of users and pay for items. Beyond Google’s AP2, OpenAI and Stripe have the Agentic Commerce Protocol (ACP), and Visa has launched its Trusted Agent Protocol.

Beyond establishing a trust layer for agents to transact, another issue enterprises face with agentic commerce is fragmentation.
Different chat platforms use different models, which also interpret information in slightly different ways. Gill said PayPal learned that when it comes to working with merchants, flexibility is important.

“How do you decide if you're going to spend your time integrating with Google, Microsoft, ChatGPT or Perplexity? And each one of them right now has a different protocol, a different catalog, config, a different everything. That is a lot of time to make a bet as to like where you should spend your time,” Gill said.
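PayPal has not published Shop Sync’s schema, so the record below is purely hypothetical; it only sketches the kind of normalized catalog, inventory and fulfillment fields a one-to-many product feed of this sort would need so that any chat platform can surface and describe an item consistently.

```python
import json

# Hypothetical, illustrative record only — not PayPal's actual Shop Sync schema.
product_record = {
    "merchant_id": "example-merchant-123",
    "sku": "TEE-CLASSIC-M-BLU",
    "title": "Classic cotton T-shirt",
    "description": "Medium, blue, 100% organic cotton.",
    "price": {"amount": "24.99", "currency": "USD"},
    "inventory": {"available": 142, "updated_at": "2025-11-03T16:20:00Z"},
    "fulfillment": {"ships_from": "US", "estimated_days": 3},
    "media": ["https://example.com/images/tee-blue.jpg"],
    "brand": "Example Apparel",
}

print(json.dumps(product_record, indent=2))
```

The point of a feed like this is that the merchant maintains one record in one place, and each chat platform reads the same normalized data rather than requiring its own integration.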
sdbart.co
#DataInfrastructure How AI-powered cameras are redefining business intelligence
Presented by Axis Communications

---

Many businesses are equipped with a network of intelligent eyes that span operations. These IP cameras and intelligent edge devices were once solely focused on ensuring the safety of employees, customers, and inventory. These technologies have long proved to be essential tools for businesses, and while this sentiment still rings true, they’re now emerging as powerful resources: the cameras and edge devices have rapidly evolved into real-time data producers.

IP cameras can now see and understand, and the accompanying artificial intelligence helps companies and decision-makers generate business intelligence, improve operational efficiency, and gain a competitive advantage. By treating cameras as vision sensors and sources of operational insight, businesses can transform everyday visibility into measurable business value.

Intelligence on the edge

Network cameras have come a long way since Axis Communications first introduced this technology in 1996. Over time, innovations like the ARTPEC chip, the first chip purpose-built for IP video, helped enhance image quality, analytics, and encoding performance. Today, these intelligent devices are powering a new generation of business intelligence and operational efficiency solutions via embedded AI.

Actionable insights are now fed directly into intelligence platforms, ERP systems, and real-time dashboards, and the results are significant and far-reaching. In manufacturing, intelligent cameras are detecting defects on the production line early, before an entire production run is compromised. In retail, these cameras can run software that maps customer journeys and optimizes product placement. In healthcare, these solutions help facilities enhance patient care while improving operational efficiency and reducing costs.

The combination of video and artificial intelligence has significantly expanded what cameras can do — transforming them into vital tools for improving business performance.

Proof in practice

Companies are creatively taking advantage of edge devices like AI-enabled cameras to improve business intelligence and operational efficiency. BMW has relied on intelligent IP cameras to optimize efficiency and product quality, with AI-driven video systems catching defects that are often invisible to the human eye. Or take Google Cloud’s shelf-checking AI technology, innovative software that allows retailers to make instant restocking decisions using real-time data.

These technologies appeal to far more than retailers and vendors. The A.C. Camargo Cancer Center in Brazil uses network cameras to reduce theft, assure visitor and employee safety, and optimize patient flow. By relying on newfound business intelligence, the facility has saved more than $2 million in operational costs over two years, with those savings being reinvested directly into patient care.

Urban projects can also benefit from edge devices and artificial intelligence. For example, Vanderbilt University turned to video analytics to study traffic flow, relying on AI to uncover the causes of phantom congestion and enabling smarter traffic management. These studies will have additional impact on the local environment and public, as the learnings can be used to optimize safety, air quality, and fuel efficiency.

Each case illustrates the same point: AI-powered cameras can fuel a tangible return on investment and crucial business intelligence, regardless of the industry.
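To picture what "feeding insights directly into dashboards and ERP systems" can look like in practice, here is a purely illustrative sketch: an on-camera analytic emits a small structured event instead of raw video, and a business-intelligence system ingests it like any other operational data source. The endpoint and field names are hypothetical, not an Axis or ERP-vendor API.

```python
import json
import urllib.request

# Hypothetical event from an edge analytic (e.g., a queue-length counter).
event = {
    "camera_id": "loading-dock-04",
    "timestamp": "2025-11-03T14:05:00Z",
    "analytic": "queue_length",
    "value": 7,
    "unit": "people",
}

request = urllib.request.Request(
    "https://bi.example.com/api/events",   # hypothetical ingestion endpoint
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment with a real endpoint
print(json.dumps(event, indent=2))
```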
Preparing for the next phase

The role of AI in video intelligence is still expanding, with several emerging trends driving greater advancements and impact in the years ahead:

* Predictive operations: cameras that are capable of forecasting needs or risks through predictive analytics
* Versatile analytics: systems that incorporate audio, thermal, and environmental sensors for more comprehensive and accurate insights
* Technological collaboration: cameras that integrate with other intelligent edge devices to autonomously manage tasks
* Sustainability initiatives: intelligent technologies that reduce energy use and support resource efficiency

Axis Communications helps advance these possibilities with open-source, scalable systems engineered to address both today’s challenges and tomorrow’s opportunities. By staying ahead of this ever-changing environment, Axis helps ensure that organizations continue to benefit from actionable business intelligence while maintaining the highest standards of security and safety.

Cameras have evolved beyond simple surveillance tools. They are strategic assets that inform operations, foster innovation, and enable future readiness. Business leaders who cling to traditional views of IP cameras and edge devices risk missing opportunities for efficiency and innovation. Those who embrace an AI-driven approach can expect not only stronger security but also better business outcomes.

Ultimately, the value of IP cameras and edge devices lies not in categories but in capabilities. In an era of rapidly evolving artificial intelligence, these unique technologies will become indispensable to overall business success.

---

About Axis Communications

Axis enables a smarter and safer world by improving security, safety, operational efficiency, and business intelligence. As a network technology company and industry leader, Axis offers video surveillance, access control, intercoms, and audio solutions. These are enhanced by intelligent analytics applications and supported by high-quality training. Axis has around 5,000 dedicated employees in over 50 countries and collaborates with technology and system integration partners worldwide to deliver customer solutions. Axis was founded in 1984, and the headquarters are in Lund, Sweden.

---

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].
sdbart.co
#AI The unexpected benefits of AI PCs: why creativity could be the new productivity
Presented by HP

---

Creativity is quickly becoming the new measure of productivity. While AI is often framed as a tool for efficiency and automation, new research from the MIT Sloan School of Management shows that generative AI enhances human creativity — when employees have the right tools and skills to use it effectively.

That’s where AI PCs come in. These next-generation laptops combine local AI processing with powerful Neural Processing Units (NPUs), delivering the speed and security that knowledge workers expect while also unlocking new creative possibilities. By handling AI tasks directly on the device, AI PCs minimize latency, protect sensitive data, and lower energy consumption.

Teams are already proving the impact. Marketing teams are using AI PCs to generate campaign assets in hours instead of weeks. Engineers are shortening design and prototyping cycles. Sales reps are creating personalized proposals onsite, even without cloud access. In each case, AI PCs are not just accelerating workflows — they’re sparking fresh ideas, faster iteration, and more engaged teams.

The payoff is clear: creativity that translates into measurable business outcomes, from faster time-to-market and stronger compliance to deeper customer engagement. Still, adoption is uneven, and the benefits aren’t yet reaching the wider workforce.

Early creative benefits, but a divide remains

New Morning Consult and HP research shows nearly half of IT decision makers (45%) already use AI PCs for creative assistance, with almost a third (29%) using them for tasks like image generation and editing. That’s not just about efficiency — it’s about bringing imagination into everyday workflows.

According to HP’s 2025 Work Relationship Index, fulfillment is the single biggest driver of a healthy work relationship, outranking even leadership. Give employees tools that let them create, not just execute tasks, and you unlock productivity, satisfaction, retention, and optimism. The same instinct that drives workers to build outside the office is the one companies can harness inside it.

The challenge is that adoption among broader knowledge workers is still low: just 29% for creative assistance and just 19% for image generation. This creative divide means the full potential of AI PCs hasn’t reached the wider workforce. For CIOs, the opportunity isn’t just deploying faster machines — it’s fostering a workplace culture where creativity drives measurable business value.

Creative benefits of AI PCs

So when you put AI PCs in front of employees who embrace the possibilities, what does that look like in practice? Early adopters are already seeing AI PCs reshape how creative work gets done.

Teams dream up fresh ideas, faster. AI PCs can spark new perspectives and out-of-the-box solutions, enhancing human creativity rather than replacing it. With dedicated NPUs handling AI workloads, employees stay in flow without interruptions. Battery life is extended, latency drops, and performance improves — allowing teams to focus on ideas, not wait times.

On-device AI is also opening up new creative mediums, from visual design to video production to music editing, with videos, photos, and presentations generated, edited, and refined in real time. Plus, AI workloads like summarization, transcription, and code generation run instantly without relying on cloud APIs. That means employees can work productively in low-bandwidth or disconnected environments, removing downtime risks, especially for mobile workforces and global deployments.
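One common way developers check whether local accelerators (GPU or NPU) are actually exposed to an on-device inference runtime is to query ONNX Runtime for its available execution providers. This is a minimal sketch of that check; exact provider names vary by hardware vendor, driver and runtime build, so the ones mentioned in the comments are examples rather than guarantees of what any given AI PC reports.

```python
# pip install onnxruntime  (or a vendor-specific build that exposes the NPU)
import onnxruntime as ort

# Lists the execution providers this build can use, e.g. "CPUExecutionProvider",
# and on suitably equipped machines GPU or NPU providers such as
# "DmlExecutionProvider" or "QNNExecutionProvider" (names depend on the build).
providers = ort.get_available_providers()
print("Available execution providers:", providers)

accelerated = [p for p in providers if p != "CPUExecutionProvider"]
if accelerated:
    print("Local acceleration available via:", ", ".join(accelerated))
else:
    print("Falling back to CPU-only inference.")
```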
And across the organization, AI PCs mean real-world, measurable business outcomes.

* Marketing: AI PCs enable creative teams to generate ad variations, social content, and campaign assets in minutes instead of days, reducing dependence on external agencies. That leads to faster campaign launches, reduced external vendor spend, and increased pipeline velocity.
* Product and engineering: Designers and engineers can prototype in CAD, generate 3D mockups, or run simulations locally with on-device AI accelerators, shortening feedback loops. That means reduced iteration cycles, faster prototyping, and faster time-to-market.
* Sales and customer engagement: Reps can use AI PCs to generate real-time proposals and personalized presentations, or analyze contracts offline at client sites, even without a cloud connection. This generates faster deal cycles, higher client engagement, and a shorter sales turnaround.

From efficiency to fulfillment

AI PCs are more than just a performance upgrade. They’re reshaping how people approach and experience work. By giving employees tools that spark creativity as well as productivity, organizations can unlock faster innovation, deeper engagement, and stronger retention.

For CIOs, the opportunity goes beyond efficiency gains. The true value of AI PCs won’t be measured in speed or specs, but in how they open new possibilities for creation, collaboration, and competition — helping teams not just work faster, but work more creatively and productively.

---

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].
sdbart.co