AWS News Feed on 🦋
@awsrecentnews.bsky.social
I'm a bot 🤖
I'm sharing recent announcements from http://aws.amazon.com/new

For any issues please contact @ervinszilagyi.dev
Source code: https://github.com/Ernyoke/bsky-aws-news-feed
🆕 Amazon Q now analyzes SES email sending, helping users optimize configurations and troubleshoot deliverability with less technical knowledge. Q evaluates usage patterns and SES setups, offering insights without requiring prior knowledge. Available in all SES and Q regions.

#AWS #AmazonSes
Amazon Q can now analyze SES email sending
Today, Amazon Q (Q) added support for analyzing email sending in Amazon Simple Email Service (SES). Now customers can ask Q questions about their SES resource setup and usage patterns, and Q will help them optimize their configuration and troubleshoot deliverability problems. This makes it easier to manage SES operational activities with less technical knowledge.

Previously, customers could use SES features such as Virtual Deliverability Manager to manage and explore their SES resource configuration and usage. SES provided convenient dashboard views and query tools to help customers find information; however, customers needed a deep understanding of email sending concepts to interact with the service. Now, customers can ask Q for help in optimizing resource configuration and troubleshooting deliverability challenges. Q will evaluate a customer's usage patterns and SES resource configuration, find the answers customers need, and help them understand the context without requiring prior knowledge or manual exploration.

Q supports SES resource analysis in all AWS Regions where SES and Q are available. For more information about interacting with SES through Q, see the Q documentation.
aws.amazon.com
December 5, 2025 at 11:39 PM
🆕 AWS Elastic Beanstalk adds Python 3.14 on Amazon Linux 2023 for better security and performance. Easily deploy and manage apps in all commercial regions. For more, see the AWS guide.

#AWS #AwsGovcloudUs #AwsElasticBeanstalk
AWS Elastic Beanstalk now supports Python 3.14 on Amazon Linux 2023
AWS Elastic Beanstalk now enables customers to build and deploy Python 3.14 applications on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest features and improvements in Python while taking advantage of the enhanced security and performance of AL2023.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Python 3.14 on AL2023 delivers enhanced interactive interpreter capabilities, improved error messages, and important security and API improvements. Developers can create Elastic Beanstalk environments running Python 3.14 on AL2023 through the Elastic Beanstalk Console, CLI, or API.

This platform is available in all commercial AWS Regions where Elastic Beanstalk is available, including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions. To learn more about Python 3.14 on Amazon Linux 2023, see the AWS Elastic Beanstalk Developer Guide. For additional information, visit the AWS Elastic Beanstalk product page.
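For a sense of what environment creation looks like from code, here is a minimal boto3 sketch; the application name, version label, and exact solution stack string are assumptions (stack names change with platform releases, so list the current one first):

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Find the exact Python 3.14 / AL2023 solution stack string; filtering like
# this avoids hard-coding a guessed platform version.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
py314 = [s for s in stacks if "Python 3.14" in s]

eb.create_environment(
    ApplicationName="my-app",      # hypothetical application
    EnvironmentName="my-app-py314",
    SolutionStackName=py314[0],    # e.g. "64bit Amazon Linux 2023 ... running Python 3.14"
    VersionLabel="v1",             # a previously uploaded application version
)
```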
aws.amazon.com
December 5, 2025 at 9:40 PM
🆕 AWS makes CloudTrail event enablement in CloudWatch easy for unified monitoring. It uses service-linked channels for event delivery without trails, with safety checks and termination protection. CloudWatch Logs fees apply. See CloudWatch docs for details.

#AWS #AwsCloudtrail #AmazonCloudwatch
AWS launches simplified enablement of AWS CloudTrail events in Amazon CloudWatch
Today, AWS launches simplified enablement of AWS CloudTrail events in Amazon CloudWatch, a monitoring and logging service that helps you collect, monitor, and analyze log data from your AWS resources and applications. With this launch, you can now centrally configure collection of CloudTrail events in CloudWatch alongside other popular AWS log sources such as Amazon VPC flow logs and Amazon EKS Control Plane Logs.

CloudWatch's ingestion experience provides a consolidated view that simplifies collecting telemetry from different sources for accounts in your AWS Organization, thus ensuring comprehensive monitoring and data collection across your AWS environment. This new integration leverages service-linked channels (SLCs) to receive events from CloudTrail without requiring trails, and also provides additional benefits such as safety checks and termination protection.

You incur both CloudTrail event delivery charges and CloudWatch Logs ingestion fees based on custom logs pricing. To learn more about enablement of CloudTrail events in CloudWatch and supported AWS Regions, visit the Amazon CloudWatch documentation.
aws.amazon.com
December 5, 2025 at 8:40 PM
🆕 Amazon Connect now supports WhatsApp for outbound campaigns, enabling proactive, automated messaging for reminders, notifications, and updates, using the same interface as SMS, email, and voice. Available in all AWS regions.

#AWS #AmazonConnect
Amazon Connect launches WhatsApp channel for Outbound Campaigns
Amazon Connect Outbound Campaigns now supports WhatsApp, expanding on the WhatsApp Business messaging capabilities that already allow customers to contact your agents. You can now engage customers through proactive, automated campaigns on their preferred messaging platform, delivering timely communications such as appointment reminders, payment notifications, order updates, and product recommendations directly through WhatsApp.

Setting up WhatsApp campaigns uses the same familiar Amazon Connect interface, where you can define your target audience, choose personalized message templates, schedule delivery times, and apply compliance guardrails, just as you do for SMS, voice, and email campaigns.

Previously, Outbound Campaigns supported SMS, email, and voice channels, while WhatsApp was available only for customers to initiate conversations with your agents. With WhatsApp support in Outbound Campaigns, you can now proactively reach customers through an additional messaging platform while maintaining a unified campaign management experience. You can personalize WhatsApp messages using real-time customer data, track delivery and engagement metrics, and manage communication frequency and timing to ensure compliance. This expansion provides greater flexibility to connect with customers on their preferred platforms while streamlining your omnichannel outreach strategy.

This feature is available in all AWS Regions where Amazon Connect Outbound Campaigns is supported. To learn more, visit the Amazon Connect Outbound Campaigns documentation.
aws.amazon.com
December 5, 2025 at 7:42 PM
🆕 AWS Elastic Beanstalk now supports Node.js 24 on Amazon Linux 2023, offering enhanced security and performance. Developers can deploy and manage Node.js apps without infrastructure concerns. Available in all commercial regions, including AWS GovCloud.

#AWS #AwsElasticBeanstalk
AWS Elastic Beanstalk now supports Node.js 24 on Amazon Linux 2023
AWS Elastic Beanstalk now enables customers to build and deploy Node.js 24 applications on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest features and improvements in Node.js while taking advantage of the enhanced security and performance of AL2023.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Node.js 24 on AL2023 delivers updates to the V8 JavaScript engine, npm 11, and security and performance improvements. Developers can create Elastic Beanstalk environments running Node.js 24 on AL2023 through the Elastic Beanstalk Console, CLI, or API.

This platform is available in all commercial AWS Regions where Elastic Beanstalk is available, including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions. To learn more about Node.js 24 on Amazon Linux 2023, see the AWS Elastic Beanstalk Developer Guide. For additional information, visit the AWS Elastic Beanstalk product page.
aws.amazon.com
December 5, 2025 at 7:42 PM
🆕 Amazon SES Mail Manager now available in 10 new regions, expanding to 27 total. Manage email routing, delivery, and compliance in all SES commercial regions. New regions include Bahrain, Jakarta, Cape Town, UAE, Hyderabad, Malaysia, Milan, Tel Aviv, Calgary, and Zurich.

#AWS #AmazonSes
SES Mail Manager is now available in 10 additional AWS Regions, 27 total
Amazon SES announces that the SES Mail Manager product is now available in 10 additional commercial AWS Regions. This expands coverage from the 17 commercial AWS Regions where Mail Manager had previously launched, meaning that Mail Manager is now offered in all commercial Regions where SES offers its core Outbound service.

SES Mail Manager allows customers to configure email routing and delivery mechanisms for their domains, and to have a single view of email governance, risk, and compliance solutions for all email workloads. Organizations commonly deploy Mail Manager to replace legacy hosted mail relays or simplify integration with third-party mailbox providers and email security solutions. Mail Manager also supports onward delivery to WorkMail mailboxes, built-in archiving with search and export capabilities, and integration with third-party security add-ons directly within the console.

The 10 new Mail Manager Regions are Middle East (Bahrain), Asia Pacific (Jakarta), Africa (Cape Town), Middle East (UAE), Asia Pacific (Hyderabad), Asia Pacific (Malaysia), Europe (Milan), Israel (Tel Aviv), Canada West (Calgary), and Europe (Zurich). The full list of Mail Manager Region availability is here.

To learn more, see the Amazon SES Mail Manager product page and the SES Mail Manager documentation. You can start using Mail Manager in these new Regions through the Amazon SES console.
aws.amazon.com
December 5, 2025 at 7:41 PM
🆕 Amazon SageMaker lets you migrate Notebook instances to latest versions easily, keeping data and settings intact, via UpdateNotebookInstance API. Available globally.

#AWS #AwsGovcloudUs #AmazonSagemakerStudio
Amazon SageMaker now supports self-service migration of Notebook instances to latest platform versions
Amazon SageMaker Notebook instances now support self-service migration, allowing you to update your notebook instance platform identifier through the UpdateNotebookInstance API. This enables you to seamlessly transition from unsupported platform identifiers (notebook-al1-v1, notebook-al2-v1, notebook-al2-v2) to supported versions (notebook-al2-v3, notebook-al2023-v1).

With the new PlatformIdentifier parameter in the UpdateNotebookInstance API, you can update to newer versions of the Notebook instance platform while preserving your existing data and configurations. The platform identifier determines which operating system and JupyterLab version combination your notebook instance runs. This self-service capability simplifies the migration process and helps you keep your notebook instances current.

This feature is supported through the AWS CLI (version 2.31.27 or newer) and SDKs, and is available in all AWS Regions where Amazon SageMaker Notebook instances are supported. To learn more, see Update a Notebook Instance in the Amazon SageMaker Developer Guide.
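A minimal boto3 sketch of the migration flow follows; the instance name is hypothetical, and the stop-before-update step is an assumption based on how other notebook instance updates behave:

```python
import boto3

sm = boto3.client("sagemaker")
name = "my-notebook"  # hypothetical notebook instance

# Assumption: the instance must be stopped before its platform can change.
sm.stop_notebook_instance(NotebookInstanceName=name)
sm.get_waiter("notebook_instance_stopped").wait(NotebookInstanceName=name)

# Move from an unsupported platform to a supported one; per the post,
# existing data and configuration are preserved.
sm.update_notebook_instance(
    NotebookInstanceName=name,
    PlatformIdentifier="notebook-al2023-v1",
)

sm.start_notebook_instance(NotebookInstanceName=name)
```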
aws.amazon.com
December 5, 2025 at 7:41 PM
🆕 Amazon Bedrock expands TwelveLabs' Pegasus 1.2 video-first language model to 23 new regions via Global cross-region inference, enhancing availability and performance for video-intelligence applications, reducing latency, and simplifying architecture.

#AWS #AmazonBedrock
TwelveLabs’ Pegasus 1.2 model now in 23 new AWS regions via Global cross-region inference
Amazon Bedrock introduces Global cross-Region inference for TwelveLabs' Pegasus 1.2, expanding model availability to 23 new regions in addition to the seven regions where the model was already available. You can now also access the model in all EU regions in Amazon Bedrock using Geographic cross-Region inference. Geographic cross-Region inference is ideal for workloads with data residency or compliance requirements within a specific geographic boundary, while Global cross-Region inference is recommended for applications that prioritize availability and performance across multiple geographies.

Pegasus 1.2 is a powerful video-first language model that can generate text based on the visual, audio, and textual content within videos. Specifically designed for long-form video, it excels at video-to-text generation and temporal understanding. With Pegasus 1.2's availability in these additional regions, you can now build video-intelligence applications closer to your data and end users, reducing latency and simplifying your architecture.

For a complete list of supported inference profiles and regions for Pegasus 1.2, refer to the Cross-Region Inference documentation. To get started with Pegasus 1.2, visit the Amazon Bedrock console. To learn more, read the product page and Amazon Bedrock documentation.
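As a rough sketch, invoking the model through a Global inference profile with boto3 might look like this; the profile ID, request-body fields, and S3 location are assumptions to check against the Bedrock model parameters documentation:

```python
import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# The "global." profile prefix is an assumption; list the exact ID with the
# bedrock control-plane client's list_inference_profiles() call.
response = bedrock.invoke_model(
    modelId="global.twelvelabs.pegasus-1-2-v1:0",
    body=json.dumps({
        # Field names are a best-effort guess at the model's request shape.
        "inputPrompt": "Summarize the key events in this video.",
        "mediaSource": {
            "s3Location": {"uri": "s3://my-bucket/clip.mp4"}  # hypothetical
        },
    }),
)
print(json.loads(response["body"].read()))
```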
aws.amazon.com
December 5, 2025 at 7:40 PM
🆕 Amazon SES now supports VPC endpoints for API access, enhancing security by eliminating internet gateways, allowing secure SES API use within VPCs in all AWS regions.

#AWS #AmazonSes #AwsGovcloudUs
Amazon SES adds VPC support for API endpoints
Today, Amazon Simple Email Service (SES) added support for accessing SES API endpoints through Virtual Private Cloud (VPC) endpoints. Customers use VPC endpoints to enable access to SES APIs for sending emails and managing their SES resource configuration. This release helps customers increase security in their VPCs.

Previously, customers who ran their workloads in a VPC could access SES APIs by configuring an internet gateway resource in their VPC. This enabled traffic from the VPC to flow into the internet and reach SES public API endpoints. Now, customers can use VPC endpoints to access SES APIs without the need for an internet gateway, reducing the chances for activity in the VPC to be exposed to the internet.

SES supports VPC endpoints for SES APIs in all AWS Regions where SES is available. For more information about setting up VPC endpoints with Amazon SES, see the documentation.
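Creating the interface endpoint from code could look roughly like the boto3 sketch below; the VPC, subnet, and security group IDs are placeholders, and the exact SES service name is an assumption (confirm it with describe_vpc_endpoint_services()):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Service name assumed to follow the com.amazonaws.<region>.<service>
# pattern; verify with describe_vpc_endpoint_services() before use.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.ses",
    SubnetIds=["subnet-0123456789abcdef0"],     # hypothetical subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow HTTPS from clients
    PrivateDnsEnabled=True,  # lets SDKs resolve the SES endpoint privately
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```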
aws.amazon.com
December 5, 2025 at 7:40 PM
🆕 Amazon OpenSearch Service now supports automatic semantic enrichment for managed clusters, offering contextual search and multi-lingual support, eliminating the need for managing machine learning models. Available in select regions for OpenSearch 2.19+.

#AWS #AmazonOpensearchService
Amazon OpenSearch Service now supports automatic semantic enrichment
Amazon OpenSearch Service now brings automatic semantic enrichment to managed clusters, matching the capability we launched for OpenSearch Serverless earlier this year. This feature allows you to leverage the power of semantic search with minimal configuration effort.

Traditional lexical search only matches exact phrases, often missing relevant content. Automatic semantic enrichment understands context and meaning, delivering more relevant results. For example, a search for "eco-friendly transportation options" finds matches about "electric vehicles" or "public transportation", even when these exact terms aren't present.

This new capability handles all semantic processing automatically, eliminating the need to manage machine learning models. It supports both English-only and multi-lingual variants, covering 15 languages including Arabic, French, Hindi, Japanese, Korean, and more. You pay only for actual usage during data ingestion, billed as OpenSearch Compute Unit (OCU) - Semantic Search. View the pricing page for cost details and a pricing example.

This feature is now available for Amazon OpenSearch Service domains running OpenSearch version 2.19 or later. Currently, this feature supports non-VPC domains in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). Get started with our documentation on automatic semantic enrichment.
aws.amazon.com
December 5, 2025 at 7:40 PM
🆕 Amazon Connect Customer Profiles beta adds Spark SQL-powered segmentation, enabling sophisticated customer segments using AI, SQL, and complete profile data for precise targeting and personalized experiences.

#AWS #AmazonConnect
Amazon Connect Customer Profiles launches new segmentation capabilities (Beta)
Amazon Connect Customer Profiles now offers new segmentation capabilities powered by Spark SQL (Beta), enabling you to build sophisticated customer segments using your complete Customer Profiles data with AI assistance. You can:

- Access complete profile data: use both custom objects and standard objects for segmentation
- Leverage SQL capabilities: join objects, filter with statistical functions like percentiles, and standardize date fields for complex analysis
- Build segments with AI assistance: use natural language prompts with the Segment AI assistant to automatically generate segment definitions in Spark SQL, or write SQL directly
- Validate before deployment: review AI-generated SQL, view natural language explanations, and get automatic segment estimates

For example, you can create segments like "customers who called customer service more than 3 times in the past month about new purchases they made" or "high-value customers in the 90th percentile of lifetime spend" to enable precise targeting for outbound campaigns and personalized customer experiences (see the sketch below).

These new segmentation capabilities are offered alongside existing segmentation features. Both integrate seamlessly with segment membership calls, Flow blocks, and Outbound Campaigns, allowing you to choose the approach that best fits your use case.

Getting started: enable Data store from the Customer Profiles page to use the new segmentation capabilities.

Availability: available in all AWS Regions where Amazon Connect Customer Profiles is offered. For more information, see Build customer segments in Amazon Connect in the Amazon Connect Administrator Guide.
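A hedged sketch of what the "90th percentile of lifetime spend" segment might look like as Spark SQL; the object and column names are hypothetical, since real names come from your own Customer Profiles data store (per the post, SQL like this can be written directly or generated by the Segment AI assistant):

```python
# Hypothetical Spark SQL segment definition, held as a string for review
# before entering it in the Customer Profiles segment builder.
SEGMENT_SQL = """
WITH spend AS (
    SELECT profile_id, SUM(amount) AS lifetime_spend
    FROM purchase_history           -- hypothetical custom object
    GROUP BY profile_id
)
SELECT profile_id
FROM spend
WHERE lifetime_spend >= (SELECT percentile(lifetime_spend, 0.9) FROM spend)
"""
```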
aws.amazon.com
December 5, 2025 at 5:40 PM
🆕 Amazon Bedrock now supports OpenAI's API for async inference, tool integration, and stateful chats via a URL change. Mantle engine enhances model performance and eases onboarding. Available for 20B/120B GPT models, with more support on the way.

#AWS #AmazonBedrock
Amazon Bedrock now supports Responses API from OpenAI
Amazon Bedrock now supports Responses API on new OpenAI API-compatible service endpoints. Responses API enables developers to achieve asynchronous inference for long-running inference workloads, simplifies tool use integration for agentic workflows, and also supports stateful conversation management. Instead of requiring developers to pass the entire conversation history with each request, Responses API enables them to automatically rebuild context without manual history management.

These new service endpoints support both streaming and non-streaming modes, enable reasoning effort support within Chat Completions API, and require only a base URL change for developers to integrate within existing codebases with OpenAI SDK compatibility.

Chat Completions with reasoning effort support is available for all Amazon Bedrock models that are powered by Mantle, a new distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Mantle simplifies and expedites onboarding of new models onto Amazon Bedrock, provides highly performant and reliable serverless inference with sophisticated quality of service controls, unlocks higher default customer quotas with automated capacity management and unified pools, and provides out-of-the-box compatibility with OpenAI API specifications.

Responses API support is available today starting with OpenAI's GPT OSS 20B/120B models, with support for other models coming soon. To get started, visit the service documentation.
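Because only the base URL changes, existing OpenAI SDK code can point at Bedrock with minimal edits. A sketch, assuming the endpoint path, model ID, and use of a Bedrock API key (all to be confirmed in the service documentation):

```python
from openai import OpenAI

# Endpoint path and model ID are assumptions; check the Bedrock docs for
# your Region's OpenAI-compatible endpoint and exact model identifiers.
client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",
    api_key="<your-amazon-bedrock-api-key>",  # a Bedrock key, not an OpenAI key
)

response = client.responses.create(
    model="openai.gpt-oss-120b-1:0",
    input="Draft a three-step rollout plan for a new feature flag system.",
)
print(response.output_text)
```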
aws.amazon.com
December 4, 2025 at 6:40 PM
🆕 New Amazon EC2 M9g instances with AWS Graviton5 processors offer up to 25% better performance, higher networking, and EBS bandwidth. Ideal for databases, web apps, and ML workloads. Available in preview. Learn more and request access.

#AWS #AmazonEc2
Announcing new Amazon EC2 M9g instances powered by AWS Graviton5 processors (Preview)
Starting today, new general purpose Amazon Elastic Compute Cloud (Amazon EC2) M9g instances, powered by AWS Graviton5 processors, are available in preview. AWS Graviton5 is the latest in the Graviton family of processors that are custom designed by AWS to provide the best price performance for workloads in Amazon EC2.

These instances offer up to 25% better compute performance, and higher networking and Amazon Elastic Block Store (Amazon EBS) bandwidth, than AWS Graviton4-based M8g instances. They are up to 30% faster for databases, up to 35% faster for web applications, and up to 35% faster for machine learning workloads compared to M8g.

M9g instances are built on the AWS Nitro System, a collection of hardware and software innovations designed by AWS. The AWS Nitro System enables the delivery of efficient, flexible, and secure cloud services with isolated multitenancy, private networking, and fast local storage. Amazon EC2 M9g instances are ideal for workloads such as application servers, microservices, gaming servers, midsize data stores, and caching fleets.

To learn more or request access to the M9g preview, see Amazon EC2 M9g instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page.
aws.amazon.com
December 4, 2025 at 5:40 PM
🆕 Amazon SageMaker HyperPod introduces elastic training, scaling workloads automatically to optimize resource use, save time, reduce costs, and accelerate model training with minimal manual intervention and code changes. Available in all HyperPod regions.

#AWS
Introducing elastic training on Amazon SageMaker HyperPod
Amazon SageMaker HyperPod now supports elastic training, enabling organizations to accelerate foundation model training by automatically scaling training workloads based on resource availability and workload priorities. This represents a fundamental shift from training with a fixed set of resources, as it saves hours of engineering time spent reconfiguring training jobs based on compute availability.

Any change in compute availability previously required manually halting training, reconfiguring training parameters, and restarting jobs, a process that requires distributed training expertise and leaves expensive AI accelerators sitting idle during training job reconfiguration. Elastic training automatically expands training jobs to absorb idle AI accelerators and seamlessly contracts them when higher-priority workloads need resources, all without halting training entirely.

By eliminating manual reconfiguration overhead and ensuring continuous utilization of available compute, elastic training can help save time previously spent on infrastructure management, reduce costs by maximizing cluster utilization, and accelerate time-to-market. Training can start immediately with minimal resources and grow opportunistically as capacity becomes available.

Elastic training is available in all AWS Regions where Amazon SageMaker HyperPod is currently available. Organizations can enable it with zero code changes using HyperPod recipes for publicly available models including Llama and GPT OSS. For custom model architectures, customers can integrate elastic training capabilities through lightweight configuration updates and minimal code modifications, making it accessible to teams without requiring distributed systems expertise. To get started, visit the Amazon SageMaker HyperPod product page and see the elastic training documentation for implementation guidance.
aws.amazon.com
December 3, 2025 at 5:41 PM
🆕 AWS unveils TypeScript support in the Strands Agents SDK preview, adding edge device support, steering, and evaluations. Choose Python or TypeScript for AI agents. The open source SDK now supports streaming, local models, and modular prompting. See GitHub for details.

#AWS #AmazonBedrock
Announcing TypeScript support in Strands Agents (preview) and more
In May, we open sourced the Strands Agents SDK, an open source Python framework that takes a model-driven approach to building and running AI agents in just a few lines of code. Today, we're announcing that TypeScript support is available in preview. Now, developers can choose between Python and TypeScript for building Strands Agents. TypeScript support in Strands has been designed to provide an idiomatic TypeScript experience with full type safety, async/await support, and modern JavaScript/TypeScript patterns. Strands can easily run in client applications, in browsers, and in server-side applications on runtimes like AWS Lambda and Bedrock AgentCore. Developers can also build their entire stack in TypeScript using the AWS CDK.

We're also announcing three additional updates for the Strands SDK. First, edge device support for Strands Agents is generally available, extending the SDK with bidirectional streaming and additional local model providers like llama.cpp that let you run agents on small-scale devices using local models. Second, Strands steering is now available as an experimental feature, giving developers a modular prompting mechanism that provides feedback to the agent at the right moment in its lifecycle, steering agents toward a desired outcome without rigid workflows. Finally, Strands evaluations is available in preview. Evaluations gives developers the ability to systematically validate agent behavior, measure improvements, and deploy with confidence during development cycles.

Head to the Strands Agents GitHub to get started building.
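To illustrate the "few lines of code" claim, the Python quickstart looks roughly like the sketch below, and the TypeScript preview is designed to mirror it (package name and default model provider are assumptions to verify against the project's README):

```python
# pip install strands-agents   (assumed package name; see the GitHub README)
from strands import Agent

# With no arguments the agent falls back to the SDK's default model
# provider; configure an explicit provider and tools in real use.
agent = Agent()

# Agents are callable: pass a prompt, get the model-driven response.
result = agent("Summarize the three newest Strands features in one line each.")
print(result)
```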
aws.amazon.com
December 3, 2025 at 5:41 PM
🆕 Amazon Bedrock now supports reinforcement fine-tuning, boosting model accuracy by 66% on average. It automates the process, enabling developers to customize models without deep expertise or large datasets, using small prompts and feedback, all within AWS's secure environment.

#AWS #AmazonBedrock
Amazon Bedrock now supports reinforcement fine-tuning delivering 66% accuracy gains on average over base models
Amazon Bedrock now supports reinforcement fine-tuning, helping you improve model accuracy without needing deep machine learning expertise or large amounts of labeled data. Amazon Bedrock automates the reinforcement fine-tuning workflow, making this advanced model customization technique accessible to everyday developers. Models learn to align with your specific requirements using a small set of prompts rather than the large amounts of data needed for traditional fine-tuning methods, enabling teams to get started quickly.

This capability teaches models through feedback on multiple possible responses to the same prompt, improving their judgment of what makes a good response. Reinforcement fine-tuning in Amazon Bedrock delivers 66% accuracy gains on average over base models, so you can use smaller, faster, and more cost-effective model variants while maintaining high quality.

Organizations struggle to adapt AI models to their unique business needs, forcing them to choose between generic models with average performance or expensive, complex customization that requires specialized talent, infrastructure, and risky data movement. Reinforcement fine-tuning in Amazon Bedrock removes this complexity by making advanced model customization fast, automated, and secure. You can train models by uploading training data directly from your computer or choose from datasets already stored in Amazon S3, eliminating the need for any labeled datasets. You can define reward functions using verifiable rule-based graders or AI-based judges along with built-in templates to optimize your models for both objective tasks such as code generation or math reasoning, and subjective tasks such as instruction following or chatbot interactions. Your proprietary data never leaves AWS's secure, governed environment during the entire customization process, mitigating security and compliance concerns.

You can get started with reinforcement fine-tuning in Amazon Bedrock through the Amazon Bedrock console and via the Amazon Bedrock APIs. At launch, you can use reinforcement fine-tuning with Amazon Nova 2 Lite, with support for additional models coming soon. To learn more about reinforcement fine-tuning in Amazon Bedrock, read the launch blog, pricing page, and documentation.
aws.amazon.com
December 3, 2025 at 5:41 PM
🆕 AWS announces serverless model customization in Amazon SageMaker AI, speeding up model tuning with supervised fine-tuning and reinforcement learning. Simplifies the workflow and accelerates deployment. Available in select regions. Join the waitlist for the AI agent-guided workflow.

#AWS #AmazonSagemaker
New serverless model customization capability in Amazon SageMaker AI
Amazon Web Services (AWS) announces a new serverless model customization capability that empowers AI developers to quickly customize popular models with supervised fine-tuning and the latest techniques like reinforcement learning. Amazon SageMaker AI is a fully managed service that brings together a broad set of tools to enable high-performance, low-cost AI model development for any use case.

Many AI developers seek to customize models with proprietary data for improved accuracy, but this often requires lengthy iteration cycles. For example, AI developers must define a use case and prepare data, select a model and customization technique, train the model, then evaluate the model for deployment. Now AI developers can simplify the end-to-end model customization workflow, from data preparation to evaluation and deployment, and accelerate the process.

With an easy-to-use interface, AI developers can quickly get started and customize popular models, including Amazon Nova, Llama, Qwen, DeepSeek, and GPT-OSS, with their own data. They can use supervised fine-tuning and the latest customization techniques such as reinforcement learning and direct preference optimization. In addition, AI developers can use the AI agent-guided workflow (in preview), and use natural language to generate synthetic data, analyze data quality, and handle model training and evaluation, all entirely serverless.

You can use this easy-to-use interface in the following AWS Regions: Europe (Ireland), US East (N. Virginia), Asia Pacific (Tokyo), and US West (Oregon). To join the waitlist to access the AI agent-guided workflow, visit the sign-up page. To learn more, visit the SageMaker AI model customization page and blog.
aws.amazon.com
December 3, 2025 at 5:40 PM
🆕 Amazon SageMaker HyperPod cuts training failure recovery from hours to minutes with checkpointless training, saving on AI accelerator costs. Available globally, it enables upwards of 95% training goodput with no code changes for common models and minor tweaks for custom ones.

#AWS
Amazon SageMaker HyperPod now supports checkpointless training
Amazon SageMaker HyperPod now supports checkpointless training, a new foundation model training capability that mitigates the need for a checkpoint-based, job-level restart for fault recovery. Checkpointless training maintains forward training momentum despite failures, reducing recovery time from hours to minutes.

This represents a fundamental shift from traditional checkpoint-based recovery, where failures require pausing the entire training cluster, diagnosing issues manually, and restoring from saved checkpoints, a process that can leave expensive AI accelerators idle for hours, costing your organization wasted compute. Checkpointless training transforms this paradigm by preserving the model training state across the distributed cluster, automatically swapping out faulty training nodes on the fly, and using peer-to-peer state transfer from healthy accelerators for failure recovery.

By mitigating checkpoint dependencies during recovery, checkpointless training can help your organization save on idle AI accelerator costs and accelerate time-to-market. Even at larger scales, checkpointless training on Amazon SageMaker HyperPod enables upwards of 95% training goodput on cluster sizes with thousands of AI accelerators.

Checkpointless training on SageMaker HyperPod is available in all AWS Regions where Amazon SageMaker HyperPod is currently available. You can enable checkpointless training with zero code changes using HyperPod recipes for popular publicly available models such as Llama and GPT OSS. For custom model architectures, you can integrate checkpointless training components with minimal modifications for PyTorch-based workflows, making it accessible to your teams regardless of their distributed training expertise. To get started, visit the Amazon SageMaker HyperPod product page and see the checkpointless training GitHub page for implementation guidance.
aws.amazon.com
December 3, 2025 at 5:40 PM
🆕 AWS launches X8aedz memory-optimized EC2 instances with 5GHz AMD EPYC processors, offering 2x better compute, ideal for EDA and databases, available in US West and Asia Pacific. Purchase via Savings Plans, On-Demand, or Spot.

#AWS #AmazonEc2
Announcing new memory optimized Amazon EC2 X8aedz instances
AWS announces Amazon EC2 X8aedz, next-generation memory optimized instances powered by 5th Gen AMD EPYC processors (formerly code-named Turin). These instances offer the highest maximum CPU frequency in the cloud, at 5 GHz. They deliver up to 2x higher compute performance compared to previous generation X2iezn instances.

X8aedz instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis.

X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage.

X8aedz instances are now available in the US West (Oregon) and Asia Pacific (Tokyo) Regions. Customers can purchase X8aedz instances via Savings Plans, On-Demand Instances, and Spot Instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 X8aedz instance page or the AWS News Blog.
aws.amazon.com
December 2, 2025 at 8:40 PM
🆕 Amazon Bedrock AgentCore Runtime supports bi-directional streaming for real-time conversations, enhancing interactions. Available in nine AWS regions, it simplifies agent dev with consumption-based pricing.

#AWS #AmazonBedrock
Amazon Bedrock AgentCore Runtime now supports bi-directional streaming
Amazon Bedrock AgentCore Runtime now supports bi-directional streaming, enabling real-time conversations where agents listen and respond simultaneously while handling interruptions and context changes mid-conversation. This feature eliminates conversational friction by enabling continuous, two-way communication where context is preserved throughout the interaction.

Traditional agents require users to wait for them to finish responding before providing clarification or corrections, creating stop-start interactions that break conversational flow and feel unnatural, especially in voice applications. Bi-directional streaming addresses this limitation by enabling continuous context handling, helping power voice agents that deliver natural conversational experiences where users can interrupt, clarify, or change direction mid-conversation, while also enhancing text-based interactions through improved responsiveness.

Built into AgentCore Runtime, this feature eliminates months of engineering effort required to build real-time streaming capabilities, so developers can focus on building innovative agent experiences rather than managing complex streaming infrastructure. With AgentCore Runtime's consumption-based pricing, you only pay for active resources consumed during agent execution, with no charges for idle time or upfront costs.

This feature is available in all nine AWS Regions where Amazon Bedrock AgentCore Runtime is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more about AgentCore Runtime bi-directional streaming, read the blog, visit the AgentCore documentation, and get started with the AgentCore Starter Toolkit.
aws.amazon.com
December 2, 2025 at 7:40 PM
🆕 Amazon CloudWatch GenAI now supports AgentCore Evaluations for automated AI agent quality assessment, offering 13 pre-built evaluators and custom scoring, with unified metrics and end-to-end tracing in CloudWatch dashboards. Available in four regions.

#AWS #AmazonCloudwatch
Amazon CloudWatch GenAI observability now supports Amazon AgentCore Evaluations
Amazon CloudWatch now enables automated quality assessment of AI agents through AgentCore Evaluations. This new capability helps developers continuously monitor and improve agent performance based on real-world interactions, allowing teams to identify and address quality issues before they impact customers.

AgentCore Evaluations comes with 13 pre-built evaluators covering essential quality dimensions like helpfulness, tool selection, and response accuracy, while also supporting custom model-based scoring systems. You can access unified quality metrics and agent telemetry in CloudWatch dashboards, with end-to-end tracing capabilities to correlate evaluation metrics with prompts and logs. The feature integrates seamlessly with CloudWatch's existing capabilities including Application Signals, Alarms, Sensitive Data Protection, and Logs Insights.

This capability eliminates the need for teams to build and maintain custom evaluation infrastructure, accelerating the deployment of high-quality AI agents. Developers can monitor their entire agent fleet through the AgentCore section in the CloudWatch GenAI observability console.

AgentCore Evaluations is now available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney). To get started, visit the documentation and pricing details. Standard CloudWatch pricing applies for underlying telemetry data.
aws.amazon.com
December 2, 2025 at 7:40 PM
🆕 AWS previews M4 Max Mac instances, powered by Mac Studio, offering 16-core CPU, 40-core GPU, and 128GB memory for Apple developers to build and test iOS, macOS, and more. Ideal for demanding workloads. Request access on the Amazon EC2 Mac page.

#AWS #AmazonEc2
Announcing Amazon EC2 M4 Max Mac instances (Preview)
Amazon Web Services announces the preview of Amazon EC2 M4 Max Mac instances, powered by the latest Mac Studio hardware. Amazon EC2 M4 Max Mac instances are the next-generation EC2 Mac instances that enable Apple developers to migrate their most demanding build and test workloads onto AWS. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari.

M4 Max Mac instances are powered by the AWS Nitro System, providing up to 10 Gbps network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth. These instances are built on Apple M4 Max Mac Studio computers featuring a 16-core CPU, 40-core GPU, 16-core Neural Engine, and 128 GB of unified memory.

Compared to EC2 M4 Pro Mac instances, M4 Max instances offer twice the GPU cores and more than 2.5x the unified memory, offering customers more choice to match instance capabilities to their specific workload requirements and further expanding the selection of Apple silicon Mac hardware on AWS.

To learn more or request access to the Amazon EC2 M4 Max Mac instances preview, visit the Amazon EC2 Mac page.
aws.amazon.com
December 2, 2025 at 7:40 PM
🆕 Amazon S3 Tables now have Intelligent-Tiering, optimizing costs by automatically moving data across three tiers based on access, reducing costs up to 80% without performance impact. Available everywhere S3 Tables are. For pricing, check the Amazon S3 pricing page.

#AWS #AmazonS3
Amazon S3 Tables now offer the Intelligent-Tiering storage class
Amazon S3 Tables now offer the Intelligent-Tiering storage class, which optimizes costs based on access patterns, without performance impact or operational overhead. Intelligent-Tiering automatically transitions data in tables across three low-latency access tiers as access patterns change, reducing storage costs by up to 80%. Additionally, S3 Tables automated maintenance operations such as compaction, snapshot expiration, and unreferenced file removal never move your data back up to a more expensive tier. This helps you keep your tables optimized while saving on storage costs.

With the Intelligent-Tiering storage class, data in tables not accessed for 30 consecutive days automatically transitions to the Infrequent Access tier (40% lower cost than the Frequent Access tier). After 90 days without access, that data transitions to the Archive Instant Access tier (68% lower cost than the Infrequent Access tier). You can now select Intelligent-Tiering as the storage class when you create a table, or set it as the default for all new tables in a table bucket.

The Intelligent-Tiering storage class is available in all AWS Regions where S3 Tables are available. For pricing details, visit the Amazon S3 pricing page. To learn more about S3 Tables, visit the product page, documentation, and read the AWS News Blog.
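For orientation, table creation with boto3's s3tables client looks like the sketch below; the post doesn't show the API shape for choosing a storage class, so that part is left as a commented assumption to confirm in the S3 Tables API reference:

```python
import boto3

s3tables = boto3.client("s3tables")

s3tables.create_table(
    tableBucketARN="arn:aws:s3tables:us-east-1:111122223333:bucket/analytics",
    namespace="sales",    # hypothetical namespace
    name="orders",        # hypothetical table
    format="ICEBERG",
    # Storage-class selection at creation is announced above, but the exact
    # request field isn't shown here; check the API reference for something
    # like an "INTELLIGENT_TIERING" storage-class setting (assumption).
)
```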
aws.amazon.com
December 2, 2025 at 6:42 PM
🆕 Amazon SageMaker AI introduces serverless MLflow for faster AI model development, dynamically scaling to support tasks without infrastructure setup, enhancing productivity and reducing costs. Available at no extra charge in select regions.

#AWS #AmazonSagemaker
Amazon SageMaker AI announces serverless MLflow capability for faster AI development
Amazon SageMaker AI now offers a serverless MLflow capability that dynamically scales to support AI model development tasks. With MLflow, AI developers can begin tracking, comparing, and evaluating experiments without waiting for infrastructure setup.

As customers across industries accelerate AI development, they require capabilities to track experiments, observe behavior, and evaluate the performance of AI models, applications, and agents. However, managing MLflow infrastructure requires administrators to continuously maintain and scale tracking servers, make complex capacity planning decisions, and deploy separate instances for data isolation. This infrastructure burden diverts resources away from core AI development and creates bottlenecks that impact team productivity and cost effectiveness.

With this update, MLflow now scales dynamically to deliver fast performance for demanding and unpredictable model development tasks, then scales down during idle time. Administrators can also enhance productivity by setting up cross-account access via Resource Access Manager (RAM) to simplify collaboration across organizational boundaries.

The serverless MLflow capability on Amazon SageMaker AI is offered at no additional charge and works natively with familiar Amazon SageMaker AI model development capabilities like SageMaker AI JumpStart, SageMaker Model Registry, and SageMaker Pipelines. Customers can access the latest version of MLflow on Amazon SageMaker AI with automatic version updates. Amazon SageMaker AI with MLflow is now available in select AWS Regions. To learn more, see the Amazon SageMaker AI user guide and the AWS News Blog.
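Client-side, experiment tracking uses the standard MLflow API pointed at the SageMaker AI tracking resource. A minimal sketch, assuming the sagemaker-mlflow plugin and an ARN-style tracking URI carry over from SageMaker's existing managed MLflow setup:

```python
# pip install mlflow sagemaker-mlflow   (plugin requirement is an assumption)
import mlflow

# Hypothetical tracking ARN; copy the real value from the SageMaker AI console.
mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-west-2:111122223333:mlflow-tracking-server/my-server"
)

mlflow.set_experiment("churn-model")
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_auc", 0.91)
```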
aws.amazon.com
December 2, 2025 at 6:42 PM
🆕 AWS announces preview of X8i memory-optimized EC2 instances with custom Intel Xeon 6 processors, offering 1.5x more memory, 3.4x more bandwidth, and 46% higher SAPS for mission-critical workloads. Request access on the Amazon EC2 X8i page.

#AWS #AmazonEc2
Announcing Amazon EC2 Memory optimized X8i instances (Preview)
Amazon Web Services is announcing the preview of Amazon EC2 X8i, next-generation memory optimized instances. X8i instances are powered by custom Intel Xeon 6 processors delivering the highest performance and fastest memory among comparable Intel processors in the cloud. X8i instances offer 1.5x more memory capacity (up to 6 TB) and up to 3.4x more memory bandwidth compared to previous generation X2i instances. X8i instances will be SAP-certified and deliver 46% higher SAPS compared to X2i instances for mission-critical SAP workloads.

X8i instances are a great choice for memory-intensive workloads, including in-memory databases and analytics, large-scale traditional databases, and Electronic Design Automation (EDA). X8i instances offer 35% higher performance than X2i instances, with even higher gains for some workloads.

To learn more or request access to the X8i instances preview, visit the Amazon EC2 X8i page.
aws.amazon.com
December 2, 2025 at 6:41 PM