KUBERNETES
Kubernetes 1.33 Release Adds Native Support for Container Sidecars

Kubernetes 1.33, released this week, introduces native support for container sidecars, a highly anticipated feature that simplifies orchestration of application and sidecar containers. This improvement, now marked as stable, eliminates the need for manual setup and enhances automation across deployments. The release includes 24 new alpha features, 18 stable upgrades, and 20 beta promotions, reflecting continuous progress in the platform’s evolution.
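In practice, a native sidecar is declared as an init container with `restartPolicy: Always`, which makes it start before the app container and keep running for the pod's lifetime. A minimal sketch (container names and images are illustrative, not from the release notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    # restartPolicy: Always on an init container marks it as a sidecar:
    # it starts before the app container, stays up alongside it, and is
    # terminated after the app container exits.
    - name: log-shipper            # illustrative sidecar
      image: fluent/fluent-bit     # illustrative image
      restartPolicy: Always
  containers:
    - name: app
      image: nginx
```

Before this feature went stable, teams typically emulated the same ordering with lifecycle hooks or wrapper scripts, which is the manual setup the release removes.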
Other notable stable features include topology-aware traffic routing, granular pod placement controls, and enhanced volume pre-population. New alpha capabilities extend the Dynamic Resource Allocation API and add customizable user preferences, while key beta features boost security, Windows support, and OCI image volume usage.
Despite the pace of innovation, many IT teams face challenges keeping Kubernetes environments up to date, especially with the growing complexity of managing multi-cluster fleets. Version 1.33, code-named Octarine: The Color of Magic, represents a step forward in Kubernetes' balance of innovation and stability.
TOGETHER WITH CLOUDZERO
New Free Benchmarking Tool: See How Your Cloud Costs Stack Up
How does your cloud spending compare to others'?
Now you can find out with CloudZero’s new interactive benchmarking tool — no login required.
Get a clear view of how your cloud efficiency stacks up against industry peers in just a few clicks. No more guessing. See where you stand and take control of your cloud costs.
CLOUD ADOPTION
CNCF Survey Surfaces Steady Pace of Increased Cloud-Native Technology Adoption

A new CNCF survey of 689 IT professionals reveals that 80% of organizations have deployed Kubernetes in production, with another 13% actively testing it. The findings highlight a major shift toward cloud-native adoption, with 52% of respondents running most or all applications in containers—up from 39% last year.
The adoption of CI/CD platforms has surged, now used by 60% of respondents (a 31% YoY increase). GitHub Actions, Argo, and Jenkins lead the pack. GitOps is gaining momentum, with 77% reporting adoption. Helm remains the top Kubernetes package manager (75%), and projects like etcd, CoreDNS, Cert Manager, and Argo rank among the most used CNCF tools.
Cloud-native tech is now central to app development for 24% of respondents, with an average of 2,341 containers per organization, double that of 2023. Still, cultural challenges (46%), CI/CD complexity (40%), and lack of training (38%) remain barriers.
The survey also underscores rising AI integration, with Kubernetes increasingly hosting stateful AI workloads. Organizations are split between on-premises and public cloud deployments (both at 59%), with multi-cloud use on the rise.
This surge signals a watershed moment in cloud-native adoption. The next frontier: managing growing complexity and scale.
TELEMETRY DATA
Grafana Labs Extends Observability Reach Deeper into Kubernetes

Grafana Labs has expanded its Fleet Management platform to better support Kubernetes environments, streamlining the way IT teams manage telemetry data collection. The latest update includes Kubernetes Monitoring Helm chart 2.0, which allows integration with Grafana Alloy, Node Exporter, OpenCost, and Kepler, along with support for multi-destination telemetry and built-in connections to key systems like cert-manager, MySQL, and Fleet Manager.
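The chart's multi-destination telemetry is driven from the Helm values file. The sketch below is illustrative only: the destination names and URLs are placeholders, and the exact key names should be checked against the chart's own values reference before deploying.

```yaml
# Illustrative values for the Kubernetes Monitoring Helm chart 2.0.
# Cluster name and destination entries are placeholders.
cluster:
  name: prod-cluster
destinations:
  # Each destination routes one telemetry type to its own backend,
  # which is how a single collection pipeline can feed multiple systems.
  - name: metrics
    type: prometheus
    url: https://prometheus.example.net/api/prom/push
  - name: logs
    type: loki
    url: https://loki.example.net/loki/api/v1/push
```

The same collection layer (Grafana Alloy) then fans telemetry out to every configured destination, rather than requiring one agent per backend.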
New features in Grafana Cloud Kubernetes Monitoring include persistent volume tracking, resource usage monitoring down to the pod level, and improved search capabilities to pinpoint contextual data across Kubernetes clusters and workloads.
The platform embraces a schema-less, open-source-first approach, leveraging Prometheus and OpenTelemetry to reduce storage and data curation burdens. Adoption of both tools continues to rise, with 76% of organizations using open-source observability solutions, and 70% deploying both Prometheus and OpenTelemetry.
Despite the benefits, challenges remain: organizations cite complexity, signal-to-noise ratio, and cost as top concerns. Many still opt to overstore telemetry data, erring on the side of caution for future analytics, compliance, or security needs.
📺 PODCAST
How to Make Engineers Apply FinOps
Another expert shares his insights on the FinOps Weekly Podcast!
How does engineering play a role in the world of FinOps? We talk with Jeremy Chaplin about team collaboration, cost-reduction incentives, and the importance of a common language.
JAVA
Oracle Releases FIPS-Validated Crypto Module for Java

Oracle has introduced Jipher, a new Java cryptographic service provider that integrates a FIPS 140-2 validated OpenSSL module. Designed for use with the Java Cryptography Architecture (JCA), Jipher allows Java applications to operate in FIPS-regulated environments, addressing security requirements set by NIST.
Available as a JAR file, Jipher supports Oracle JDK 17/21 and GraalVM, and is accessible through Java SE tools and Oracle Support. Unlike existing JDK crypto providers, Jipher meets FIPS 140-2 compliance, making it suitable for high-security and regulated deployments.
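Like any JCA provider, Jipher plugs into the standard `java.security` machinery. The announcement does not give the provider's class name, so the snippet below sticks to stock JCA calls: it enumerates the providers currently installed, which is where Jipher would appear once its JAR is on the classpath and registered (via the `java.security` file or `Security.addProvider`).

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Print every registered JCA provider. On a stock JDK this lists
        // providers such as SUN and SunJCE; with Jipher registered, it
        // would show up here as well, and applications could then request
        // its FIPS-validated algorithms through the usual
        // Cipher/MessageDigest getInstance calls.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersionStr());
        }
    }
}
```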
This release aligns with Oracle’s broader commitment to modern cryptographic standards. It follows the addition of post-quantum algorithms in JDK 24, reinforcing Java's preparedness for future security challenges.
MLOps
MLOps in the Cloud-Native Era — Scaling AI/ML Workloads with Kubernetes and Serverless Architectures

Machine learning (ML) is now foundational to enterprise innovation—but deploying and scaling ML in production remains a challenge. Cloud-native MLOps offers a powerful solution by combining Kubernetes and serverless computing to streamline model deployment, automate ML pipelines, and optimize infrastructure use.
Kubernetes supports containerized ML pipelines, dynamic autoscaling, GPU scheduling, and seamless integration with MLOps tools like Kubeflow, MLflow, and Argo Workflows. For event-driven and cost-sensitive inference, serverless platforms like AWS Lambda and Google Cloud Functions offer lightweight, on-demand execution without infrastructure overhead.
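GPU scheduling, for example, comes down to declaring the accelerator as an extended resource on the training workload; the scheduler then places the pod only on nodes that expose it. A minimal sketch (job name and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: train-model                # illustrative name
spec:
  template:
    spec:
      containers:
        - name: trainer
          image: registry.example.com/ml/train:latest  # illustrative image
          resources:
            limits:
              # Requesting the extended resource steers the pod onto a
              # node with an available GPU (exposed by a device plugin).
              nvidia.com/gpu: 1
      restartPolicy: Never
```

The serverless side of the hybrid then handles inference on demand, so GPUs are reserved only for the training jobs that need them.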
A hybrid Kubernetes + serverless approach balances high-performance training with cost-effective inference. Key best practices include continuous monitoring, GitOps-driven CI/CD, and efficient resource management.
Cloud-native MLOps is enabling faster, more reliable, and scalable AI operations—key for any enterprise seeking to unlock AI’s full potential.
MORE NEWS
What’s New in CORA
CORA v0.0.7 introduces support for Idle Recommendations, helping users identify and address underutilized resources more effectively. Additionally, a Resource ID filter has been added to the Usage Optimization tab, offering more granular control when analyzing cloud usage data.
Previous updates included minor fixes (v0.0.6) and the initial rollout of core functionality (v0.0.5), making this the most feature-rich release to date.
Join the FinOps Weekly Community
We just launched a community and we are looking forward to having you with us!
An open space for making FinOps for Everyone a reality
Professional Spotlight
Guillermo Ruiz

Sr. Specialist Solutions Architect, Efficient Compute at Amazon Web Services (AWS)
That’s all for this week. See you next Sunday!
Before You Go, Here’s How We Can Collaborate
Master FinOps with us: Learn about our Mastering FinOps Courses, taught by FinOps Professionals like the author Alfonso San Miguel.
Sponsor this newsletter: Promote your company in this newsletter and reach a cloud audience that wants to stay up to date.
Collaborate with SmartClouds: Our brand expands to more than just newsletters. Podcasts, Posts, Webinars, Events, and any collaboration related to Cloud are available.