cloud native Archives - SD Times
https://sdtimes.com/tag/cloud-native/

Navigating Kubernetes: The common mistake newcomers make
https://sdtimes.com/kubernetes/navigating-kubernetes-the-common-mistake-newcomers-make/ (Fri, 08 Mar 2024)

With so many newcomers to the cloud-native computing space, it’s only to be expected that an ecosystem of certifications and accreditations has cropped up around Kubernetes over the years. And as demand for K8s expertise continues to grow, so does the number of professionals seeking out these certifications and accreditations.

In fact, you’d be hard-pressed to find an open-source software project that has seen more rapid developer adoption in the past decade. However, K8s’ ascent to ubiquity hasn’t been without its challenges. And as one might imagine, with such a sudden influx of newly minted developers, K8s adoption has come with some growing pains. 

Its widespread adoption has transformed how developers deploy and manage applications. However, the technology continues to suffer from some common misconceptions, especially that it is overly complex, unwieldy in production, and not enterprise-ready.

In fact, in a 2021 survey of IT professionals, a stunning 100% of respondents whose organizations are using or planning to use Kubernetes said that they were facing challenges with adoption, with the most commonly cited challenges being a lack of IT resources, difficulty scaling, and difficulty keeping up with advancements in underlying technologies. However, what these findings fail to recognize is that many of these perceived shortcomings and challenges are not necessarily inherent to Kubernetes itself. Instead, as we’ll outline below, many of these challenges stem from some fundamental misunderstandings of how to approach and conceptualize the K8s system.

The #1 mistake

The single most widespread and detrimental mistake developers make when working in Kubernetes is actually one of mindset — the all-too-common mistake of bringing monolithic logic into the cloud-native space. Kubernetes offers a plethora of abstractions and resources designed for the cloud-native ecosystem. Failing to leverage these resources appropriately can lead to scalability issues, maintenance challenges, and inefficient application and database deployments. This misconception can also foster the belief that Kubernetes and the cloud-native paradigm are unsuitable for enterprise use, while in reality, it highlights the necessity of adapting application architectures to harness the full power of containers and orchestration.

To remedy this common mistake, teams should embed strong architecture expertise into their development processes. Having engineers with cloud-native experience can guide teams to success and help them avoid common pitfalls. This approach emphasizes the importance of understanding and adapting to the unique characteristics of Kubernetes and cloud-native development.

Cloud-native thinking

The best way to avoid future mistakes is to cultivate cloud-native thinking and experience within development teams. Encourage continuous learning and training on Kubernetes best practices, and encourage teams to participate in the K8s community. Promote a culture of collaboration and knowledge-sharing, allowing team members to benefit from each other’s experiences and insights. Regularly reassess and update development practices to align with the evolving Kubernetes landscape.

Establishing high standards and quality control measures is also essential for successful Kubernetes development. Platform teams should enforce rigorous standards for anything deployed in Kubernetes, leading to higher availability, improved security, and enhanced performance. Operators can be valuable tools in this regard, automating the deployment of applications with best practices right out of the box.

Microservices is an architectural style that is increasingly adopted by software development teams. The shift from a monolith to a collection of small autonomous services is a good first step towards cloud native. Microservices architecture offers various benefits such as flexibility in using different technologies, ease of understanding, adaptability, and scalability.

With growing interest in running databases on Kubernetes, getting this architecture right becomes even more challenging. It’s essential that businesses demand enterprise-grade functionality in operators, ensuring that databases on Kubernetes are deployed using modern and efficient approaches.

By understanding the most common pitfalls and looking to more experienced developers, newcomers can adopt best practices, embed strong architecture expertise, set high standards, and leverage modern approaches to fully harness the power of Kubernetes in the cloud-native ecosystem — ensuring a smoother journey into the world of Kubernetes development, and paving the way for more scalable, efficient, and secure applications.

 

Red Hat releases Red Hat Device Edge, OpenShift 4.14, and donates new Backstage plugins to open-source community
https://sdtimes.com/softwaredev/red-hat-releases-red-hat-device-edge-openshift-4-14-and-donates-new-backstage-plugins-to-open-source-community/ (Mon, 06 Nov 2023)

Today at KubeCon + CloudNativeCon North America 2023, Red Hat announced a number of updates to its portfolio.

First, the company announced the general availability of Red Hat Device Edge, which was created to provide a platform for deploying devices at the edge. It includes an operating system optimized for the edge and a supported distribution of the lightweight Kubernetes project MicroShift, providing customers with two deployment options.

According to Red Hat, other benefits include a minimal footprint, a consistent operational experience, workload flexibility, and simplified deployment.  

Next, it released Red Hat OpenShift 4.14. The latest version includes the general availability of hosted control planes, which reduce management costs, improve cluster provisioning time, help overcome limitations due to cluster scale, and decouple control planes from workloads for greater security. Red Hat claims that hosted control planes can save 30% in infrastructure costs and 60% in developer time.

Other capabilities include the ability to run virtual machines and containers side by side using Red Hat OpenShift Virtualization, support for NVIDIA GPU accelerators, and the availability of Red Hat OpenShift Dedicated on Google Cloud Marketplace. 

The company also revealed it has donated five new plugins to Backstage, which is a framework for building developer portals. The technologies that correspond to the new plugins include Azure Container Registry, JFrog Artifactory, Kiali, Nexus, and 3scale. 

This isn’t the first time Red Hat has contributed to the Backstage community. In 2022, the company first joined that community and then donated five plugins back in May of this year. Those plugins include Application Topology for Kubernetes, Multi Cluster View with Open Cluster Management, Container Image Registry for Quay, Pipelines with Tekton, and Authentication and Authorization with Keycloak. 

“We believe the future of developer productivity depends on the continued evolution and innovation of projects like Backstage, and we’re focused on making this future a reality through contributions that help simplify, extend and accelerate the development process,” said Balaji Sivasubramanian, senior director of Developer Tools Product Management at Red Hat. “Donating these plug-ins to the Backstage community is a reflection of Red Hat’s commitment to helping developers meet the demands of today as they innovate for tomorrow.”

 Finally, Red Hat launched Ansible Inside, which allows developers to embed Ansible Playbooks inside their applications. According to the company, this offering was built for customers who want to embed automation in their applications, but don’t require all of the capabilities offered by Ansible Automation Platform.

 

Crossplane 1.14 released with platform engineering in mind
https://sdtimes.com/cloud/crossplane-1-14-released-with-platform-engineering-in-mind/ (Wed, 01 Nov 2023)

The team behind Crossplane has announced the release of the latest version of the framework for building control planes. 

According to the project maintainers, Crossplane 1.14 is the biggest release of the project so far and introduces several new features that are targeted at benefiting platform engineers. The CLI was updated with several new commands that are useful in creating and managing control planes, such as “init” to initialize a new project, “build” and “push” to package and distribute to a registry, “install” to deploy the package into a control plane, “render” to test composition logic, and “trace” to examine live resources, which is helpful in root cause analysis. 

The maintainers believe “render” and “trace” are the most significant of these new commands. They explained that prior to this release there wasn’t much support for testing compositions before they were deployed into live clusters. “Render” changes this by allowing developers to view the compositions they are working on, enabling them to verify that they are correct before proceeding. The “trace” command also helps with troubleshooting because developers can investigate specific resources.

Also in this release is a beta of Composition Functions, which allows developers to create custom logic using whatever language they want. The project also now comes with a few generic Functions, which eliminates the need to write code for those. 

“An entire ecosystem of reusable Functions will be available in the Upbound Marketplace that will address common scenarios not previously possible with traditional composition based on patch and transform abilities. This flexibility of writing your custom logic in a language of your choice or reusing general Functions from the ecosystem will unlock a wealth of new scenarios for people building control planes with Crossplane,” Jared Watts, co-creator, maintainer, and steering committee member of Crossplane, wrote in a blog post.

This release also introduces the “Usage” API, which allows developers to declare dependency relationships between resources. The reason behind this is that sometimes when Crossplane cleans up resources, it may not get to all of them, resulting in “orphaned resources” being left behind. This happens when a resource that others depend on is deleted before its dependents, leaving Crossplane unable to delete the ones that remain. With the “Usage” functionality, the declared dependency relationship is taken into account during deletion, preventing a resource that others still depend on from being deleted first.

The next major release of Crossplane is expected in January 2024 and will include even more investments in developer experience that will improve the methods for building control planes.

 

Lightbend introduces new version of Akka designed for seamless integration between cloud and edge deployments
https://sdtimes.com/softwaredev/lightbend-introduces-new-version-of-akka-designed-for-seamless-integration-between-cloud-and-edge-deployments/ (Tue, 31 Oct 2023)

Lightbend has announced the latest version of Akka, its platform for developing concurrent, distributed applications. With the introduction of Akka Edge, developers will be able to unify applications across cloud and edge environments.

Akka Edge allows developers to build something once and then have it work across multiple environments. It keeps code, tools, patterns, and communication the same regardless of where the application is living. 

“Where something will run—on-prem, cloud, edge, or device—should not dictate how it is designed, implemented, or deployed. The optimal location for a service at any specific moment might change and is highly dependent on how the application is being used and the location of its users. Instead, the guiding principles of Akka Edge evolve around data and service mobility, location transparency, self-organization, self-healing, and the promise of physical co-location of data, processing, and end-user—meaning that the correct data is always where it needs to be, for the required duration, nothing less or longer, even as the user moves physically in space,” Jonas Bonér, CEO and founder of Lightbend, wrote in a blog post.

It uses gRPC projections to allow for asynchronous service-to-service communication. It also has active entity migration that can be defined programmatically, as well as temporal, geographic, and use-based migration capabilities.

The company also introduced several new features that enable Akka applications to run more efficiently in environments with limited resources, which is common at the edge. These include support for GraalVM native images and lightweight Kubernetes distributions, support for multidimensional autoscaling, and lightweight storage at the edge. 

Other new features include Active/Active digital twins, easier methods for network segregation, and placing more of an emphasis on business logic and flow over tool integrations. 

“As the line between cloud and edge environments continues to blur, Akka Edge brings industry-first capabilities to enable developers to build once for the Cloud and, when ready, deploy seamlessly to the Edge,”  Bonér added. 

 

Rancher Labs and k3s creators launch new project, Acorn, for developing in cloud sandboxes
https://sdtimes.com/softwaredev/rancher-labs-and-k3s-creators-launch-new-project-acorn-for-developing-in-cloud-sandboxes/ (Wed, 25 Oct 2023)

The creators of Rancher Labs and k3s are unveiling a new project: Acorn. Run under the company Acorn Labs and currently in beta, Acorn enables developers to create in a cloud sandbox and easily share their work with others. 

According to the creators, the goal of this project is to make “cloud computing accessible, collaborative, and delightful for developers.”

The sandbox environment can be used for up to two hours at a time, and developers get access to 4 GB of RAM at any time. Once the two hours are up, the workloads are stopped, but developers can recreate them whenever they choose.

Developers can run multiple projects through Acorn. The Pro version offers collaboration features where developers can invite team members, who can then collaborate across multiple applications and environments. It also includes management tools where users can set role-based access control policies. 

Acorn also provides a set of DevOps tools to handle monitoring, logging, secret management, and cloud management. 

There is also a Dev Mode that enables users to work directly on an application while it is running. Changes can be synchronized in real time, debuggers can be added, and logs can be viewed in this mode. 

Projects created in Acorn are saved as Acorn Images, which are OCI-compliant and work with any registry. These images are identical wherever they are deployed, which cuts down errors and configuration issues. 

“Cloud computing has become increasingly complex for large organizations, let alone individual developers and small teams,” said Sheng Liang, CEO of Acorn. “With Acorn, we’ve eliminated that complexity. Users don’t need to be experts in Kubernetes, Terraform, DevOps or AWS to take advantage of the power of cloud computing. Acorn puts the power of the most popular cloud computing solutions at your fingertips. The only question is what you will create from then on.”

 

What makes WebAssembly special? The Component Model
https://sdtimes.com/softwaredev/what-makes-webassembly-special/ (Tue, 17 Oct 2023)

WebAssembly (Wasm) began its journey in the web browser. However, it has since expanded, becoming a sought-after technology for server-side environments, Internet of Things (IoT) systems, and synchronous plugins. With this kind of horizontal expansion, even reaching into multi-tenanted cloud environments, it is clear that Wasm has some desirable attributes. One additional innovation, on the Wasm side, elevates the technology beyond just being desirable. Wasm has the potential to change the game for software developers, revolutionizing the landscape of software development.

What is this new thing that makes Wasm so exciting and special? It’s a technology that hides behind a deceptively boring name: The WebAssembly Component Model.

Before diving into the Component Model, though, let’s trace Wasm’s journey from the web browser to the cloud.

Why Wasm Moved Beyond the Web Browser

Wasm was developed for a very specific purpose: A consortium of developers from Mozilla, Apple, Microsoft, and Google wanted a vendor-neutral standardized way to run languages other than JavaScript inside of the web browser. As an industry, we have accrued software written in languages such as C, Java, and Python over many decades. In more recent times, newer languages like Rust continue to add to the vast array of software tools and libraries. However, web browsers, being restricted to a JavaScript runtime, are not able to execute code written in these high-level languages.

Wouldn’t it be great, reasoned the Wasm creators, if we could create a standard binary format to which any of these languages could compile? From this point onwards, we saw toolchain development that enabled high-level languages like C, Rust, and others to be compiled into WebAssembly binaries. These binaries could be loaded into web browsers and interacted with using JavaScript. This brought about a rich interplay with existing web technologies; suddenly, web developers could write JavaScript code that interfaced with these WebAssembly binaries, harnessing functionality and achieving near-native performance right there in the browser.
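To make that toolchain path concrete, here is a minimal sketch (not from the article; the crate setup, function name, and file names are illustrative) of a Rust function compiled to a Wasm binary that JavaScript can then call in the browser:

```rust
// lib.rs -- assumes Cargo.toml declares crate-type = ["cdylib"].
// Building with `cargo build --target wasm32-unknown-unknown --release`
// produces a .wasm file whose `add` export can be instantiated from
// JavaScript, for example:
//   const { instance } = await WebAssembly.instantiateStreaming(fetch("add.wasm"));
//   instance.exports.add(2, 3); // => 5
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```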

The web browser environment (for which Wasm was initially designed) does carry forward some constraints that any deliberately interoperable system (Wasm) must abide by:

Security: The web browser routinely runs code from unknown and untrusted sources. As we click around on the internet, we rely on the web browser to protect us from bad actors and buggy code. In the same vein, Wasm must be highly secure.

Portability: Web browsers run natively on all major operating systems and various system architectures. Wasm must not require that an application be compiled to a specific OS or architecture but must support running on many (ideally all) platforms without compromising performance. Wasm must ensure users have a smooth and efficient experience regardless of where the application runs.

Performance: As we browse the web, we grow impatient, waiting for things to load. Just a few extra moments can give us the feeling that our expected digital dopamine isn’t arriving on schedule. At this point, we often close that tab or move on, clicking, swiping, and liking something else. Wasm must always load and execute immediately to ensure that a user’s interest is retained.

In addition to the three constraints above, a fourth, highly audacious constraint remains: a way in which all of the disparate language communities (each with its own timelines, processes, and prioritizations) can adopt Wasm into their build and runtime toolchains, optimally as soon as is practicably possible.

By rights, Wasm should have failed simply because that fourth item mentioned above could easily be considered unrealistic (in the real world). Yet, against the odds, language communities began supporting Wasm. First, C and C++ gained support (from the Wasm creators themselves), as did the burgeoning Rust programming language. That very well may have been the stopping point. But it was not. Language after language has begun adding support. Python, Ruby, Swift, the .NET languages, Go, Zig… the list started growing and continues to grow. Even wholly new projects (like the functional programming language Grain, which compiles exclusively to Wasm) are building their community and undergoing ongoing development in the Wasm space. 

With this level of language support, Wasm consistently increases its foothold as a promising tool. Wasm’s security, portability, and performance virtues are prompting savvy developers outside of the web browser world to take notice. That foothold grows stronger as stories appear of companies like the BBC and Disney using Wasm in their embedded streaming video apps, and as other parts of the web, like Samsung’s documentation pages, go on to explain “WebAssembly and its application in Samsung Smart TVs.” Cloud innovators such as Fastly, Fermyon, and Microsoft continue to enhance Wasm tooling and frameworks, integrating Wasm seamlessly into cloud and edge computing. Companies like Shopify, Suborbital (now part of F5), and Dylibso are making the use of Wasm as a plugin framework a reality. All roads lead to refining the Wasm application developer experience and simplifying Wasm’s implementation in mainstream products and services. If we boil it down, in every case, the magic formula is the same: Wasm offers a secure, portable environment that performs well across devices, backed by broad support from various language ecosystems.

Detractors might point out that this is mainly overwrought praise for boring features. Sure, it’s fine. One could argue that other solutions just might also be “fine”, right? More to the point, if the ambitions behind Wasm stopped here, then I would have to agree: Wasm is simply a “good enough” technology. But something has been brewing in the standards groups working on Wasm. And this “something” boosts Wasm from fine to redefining.

The Component Model is the Future

Here’s an intriguing question: If languages such as Rust can compile to Wasm and Python can operate within Wasm, could there be an architecture to build interoperable Wasm libraries, applications, and environments? If the answer is affirmative, we might be on the verge of realizing a goal that has largely eluded the programming world: creating libraries (programs with reusable code) that are universally usable, regardless of their source language.

The audacious goal of the Wasm project was that, in theory, any language should be able to run in a Wasm runtime. And the surprising fact is that many languages (over two dozen) already can.

To put this in perspective, let’s consider some standard tasks typically encapsulated in libraries: parsing an XML or JSON file, formatting a date, or implementing a complex encryption scheme. For each language, its community writes this code in their preferred language. JavaScript has an XML parser; so does Rust, and every major language features an XML parser crafted for it. Each of these libraries needs updates, patches for security issues, and more. Consider the countless hours dedicated to maintaining myriad libraries, all of which ultimately perform the same essential functions. What really drives this point home for me is that RFC 7159 officially describes JSON as a language-independent data interchange format – you read correctly; “language-independent”.

The Component Model addresses the challenge of enabling code compiled to Wasm to intercommunicate. It facilitates communication between one Wasm component and another. This means a Python program can now import a library written in JavaScript, which can import a library written in Rust. Remarkably, the original programming language has no bearing on the usability of the code.

Let’s put this into meaningful context. How does this change how we talk to computers? Standards like Unicode already allow us to represent our written human languages in 8-bit sequences. For instance, a UTF-8 encoded JSON string can be serialized into bytes, raw data that computers can store, transmit, process, and stream. These bytes can subsequently be deserialized back into a UTF-8 encoded JSON string. The good news is that many high-level programming languages already support such standards (and have done so for quite some time).

At this point, you might wonder how on earth we will be able to deal with different implementations of high-level language variables. Take strings, again, as an example. A string in C might be represented entirely differently from a string in Rust or a string in JavaScript [1]. In fact, there are two types of strings in Rust: the “String” type, denoted by an upper case “S”, is stored as a vector of bytes, while the other string type, “str”, usually written with a leading ampersand as “&str”, is stored as a slice [2]. Does this exacerbate the situation? No, because the Wasm Component Model has an agreed-upon way of defining those richer types and an agreed-upon way of expressing them at module boundaries. These type definitions are written in a language called Wasm Interface Type (WIT), and the way they translate into bits and bytes is called the Canonical Application Binary Interface (ABI). It now becomes clear that components are portable across architectures, operating systems, and languages. What do we stand to gain from this? Well, for one, we can stop reimplementing the same libraries in every language under the sun.
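As a small illustration (not from the article) of the point about Rust’s two string types, here is a minimal sketch of how each is represented:

```rust
fn main() {
    // `String`: an owned, growable string, stored as a heap-allocated
    // vector of UTF-8 bytes along with a length and capacity.
    let owned: String = String::from("hello, component model");

    // `&str`: a borrowed string slice, essentially a pointer and a length
    // referring to UTF-8 bytes that live elsewhere (here, inside `owned`).
    let slice: &str = &owned[..5];

    println!("owned = {owned:?}, slice = {slice:?}");
}
```

At a component boundary, it is the Canonical ABI, rather than the source language, that fixes how a string is laid out in memory, which is why differences like this stop mattering to callers written in other languages.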

In addition, a library can be written in the language best suited for it and then shared with all other languages. For example, a high-performance cryptography library might best be written in Rust, where one could argue that built-in protection against null pointer dereferences, dangling pointers, and buffer overflows makes Rust the safest tool for that particular task. Another example is that a library related to data processing might better be written in Python, perhaps due to its network effect in this programming genre and Python’s already extensive data processing library ecosystem. Don’t hang your hat on just these examples. This is but the tip of the iceberg.

The component model enables developers to update specific sections of an application rather than overhauling the entire system. If a security vulnerability emerges in the cryptography component, for instance, only that individual component needs to be upgraded, leaving the rest of the application untouched. Moreover, such updates might be executed in real-time, eliminating the need for a full system shutdown, rebuild, and deployment. This approach could foster more agile application iterations. Furthermore, data storage backends can be interchanged seamlessly without modifying the broader application’s code. Implementing specialized tests, collecting metrics, and debugging could be as straightforward as integrating the appropriate intermediary components into the application without altering the existing code.

This approach will seem revolutionary for developers accustomed to regular code rebuilds and redeployments. What’s thrilling is that this technology is already here. Tools like Fermyon Spin and the Bytecode Alliance’s Wasmtime fully support this component model. And with the Component Model being standardized by the respected W3C (the standards body responsible for HTML and CSS), it’s open for anyone to implement. It’s anticipated that the Component Model will be widely adopted in the Wasm ecosystem within a year.

Join us at KubeCon + CloudNativeCon North America this November 6 – 9 in Chicago for more on Kubernetes and the cloud native ecosystem.

Staying relevant in tech requires continual learning
https://sdtimes.com/culture/staying-relevant-in-tech-requires-continual-learning/ (Tue, 10 Oct 2023)

The constantly shifting economy is a reminder to technical professionals seeking new roles that change is the only constant. The Linux Foundation’s 2023 State of Tech Talent Report highlights what some 400 IT hiring managers and staffing professionals view as must-haves for new hires. Here are three insights and the takeaways technical professionals should know as they seek their next challenge.

Insight 1: New hiring continues to focus on developers and newer technologies, especially: cloud/containers (50%); cyber security (50%); and Artificial Intelligence (AI) and Machine Learning (ML) (46%).

As technology development continues to accelerate, hiring managers are seeking technical staff who have existing strong skill sets in high-demand areas. Not surprisingly, evolving technologies such as cloud and cyber security remain in high demand. The fact that respondents said they already have cloud native (59%) and cyber security (59%) professionals on staff indicates that organizations need to expand teams and expertise.

But there is more to this. Organizations report having a range of hiring priorities to grow existing teams and capabilities. These include: 

  • Augmented/Virtual Reality
  • AI/ML
  • Blockchain
  • CI/CD 
  • DevOps 
  • Kubernetes

As you consider these factors against your own professional development path, it is important to remember that while AI is dominating hiring trends today, the “hot” technologies could be very different a year or two from now. This is just another reason technical professionals excel when they continuously learn.  

Insight 2: 53% of respondents feel upskilling is extremely important to acquire the skills and knowledge their organization needs.

Whether you are looking for a new role at a new organization or trying to shore up your current employment prospects, training and certifications are key. Many organizations are currently pursuing the approach of upskilling existing team members to close skill gaps. Look for these opportunities and raise your hand. Seeking out and participating in in-house training opportunities helps set you up for success whether you’re looking for your next promotion, a lateral transfer or an opportunity elsewhere.

If your current employer isn’t offering a training program to close critical skill gaps, that doesn’t mean you’re stuck. As an insider, you are in an excellent position to identify where the gaps are for your organization and seek suitable training programs on your own. This will heighten your stature, improve job security and potentially get you promoted. Pursuing in-demand skills also gives you an opportunity to make a case for your employer to pick up the tab.

If necessary, seeking your own certification on personal time gives you more freedom to acquire new skills that interest you and best fit your intended career path.

Insight 3: The majority of respondents agree that certification (73%) and pre-employment testing (81%) are necessary to verify skills.

As candidates navigate this unusual and uncertain job market, one thing is for sure: employers want candidates who have proven and verifiable skills. 

This means seeking out certifications that aren’t based on multiple choice exams, but rather ensuring participants have well-developed coding skills along with the critical thinking needed to solve real-world IT problems. When assessing potential certifications, look for those that include the use of lab environments and that test real coding capabilities from the command prompt.

Similarly, look for expertise when assessing training courses, especially ones related to emerging technologies. A good check is the background of the instructors. Are they professional instructors, or are they technical professionals themselves, working with new technologies? Instructors who are working with – as well as teaching about – emerging technology are much more likely to bring real-life experiences that will help you learn.

As you consider your next career move as a technical professional, be sure to ask yourself the hard questions about what you want from your next role and beyond. Careers don’t just happen; you have to build your own path. Doing so sets you up to respond to the shifting job economy of today, tomorrow, and the decades to come.

Not sure where to start your journey? More than half of the Linux Foundation Training & Certification e-Learning catalog is free. The foundation also offers many free tools to help new, mid-career, and senior professionals determine their next steps.

Join us at KubeCon + CloudNativeCon North America this Nov. 6 – 9 in Chicago for more on Kubernetes and the cloud native ecosystem.

NIST publishes new draft framework for integrating supply chain security into CI/CD pipelines
https://sdtimes.com/security/nist-publishes-new-draft-framework-for-integrating-supply-chain-security-into-ci-cd-pipelines/ (Mon, 11 Sep 2023)

The National Institute of Standards and Technology (NIST) published a new draft document that outlines strategies for integrating software supply chain security measures into CI/CD pipelines. 

Cloud-native applications typically use a microservices architecture with a centralized infrastructure like a service mesh. These applications are often developed using DevSecOps, which uses CI/CD pipelines to guide software through stages like build, test, package, and deploy, akin to a software supply chain, according to the document.

“This breakdown is very helpful for development organizations, as it provides more concrete guidance on how to secure their environments and processes. One thing that stands out is the emphasis on the definition of roles and, closely related, the identification of granular authorizations for user and service accounts,” said Henrik Plate, security researcher at Endor Labs. “This is necessary to implement access controls for all activities and interactions in the context of CI/CD pipelines according to least-privilege and need-to-know principles. However, the management of all those authorizations across the numerous systems and services invoked during pipeline execution can be challenging.”

Recent analyses of software attacks and vulnerabilities have prompted governments and private-sector organizations in software development, deployment, and integration to prioritize the entire software development lifecycle (SDLC). 

The security of the software supply chain (SSC) relies on the integrity of stages like build, test, package, and deploy, and threats can emerge from malicious actors’ attack vectors as well as from defects introduced when proper diligence is not followed during the SDLC, according to the NIST draft.

“It’s not surprising that the document acknowledges that the ‘extensive set of steps needed for SSC security cannot be implemented all at once in the SDLC of all enterprises without a great deal of disruption to underlying business processes and operations costs,’” Plate explained.

This highlights the timeliness of providing guidance to organizations on implementing high-level recommendations like the Secure Software Development Framework (SSDF), which is a set of fundamental, sound, and secure software development practices based on established secure software development practice documents from organizations such as BSA, OWASP, and SAFECode, according to the NIST draft.

The NIST draft addresses the upcoming self-attestation requirement for software suppliers to declare adherence to SSDF secure development practices for federal agencies. The document aims to clarify expectations in the context of DevSecOps and CI/CD pipelines regarding what is considered necessary, according to Plate.

Plate added that one major concern with the draft is that tools that can improve the SSC, such as Sigstore and in-toto, are not yet widely adopted, with only a few open-source ecosystems (including npm) and select commercial services having integrated them.

“It will require some time until those technologies are adopted more broadly in various open-source ecosystems and among open-source end users,” Plate added.

Organizations should go beyond simply detecting open-source software defects after they occur. They should also proactively manage open-source dependency risks by considering factors like code quality, project activity, and other risk indicators. A holistic approach to open-source risk management helps reduce both security and operational risks, as outlined in the Top 10 Open Source Dependency Risks, according to Plate. 

This new draft by NIST is intended for a broad group of practitioners in the software industry, including site reliability engineers, software engineers, project and product managers, and security architects and engineers. The public comment period is open through Oct. 13, 2023. See the publication details for a copy of the draft and instructions for submitting comments.

WasmCon: State of WebAssembly 2023, initial Wasm landscape from CNCF, and more
https://sdtimes.com/softwaredev/wasmcon-state-of-webassembly-2023-initial-wasm-landscape-from-cncf-and-more/ (Wed, 06 Sep 2023)

WebAssembly has grown far beyond its original intent of being used to develop web applications, and can now be found in many corners of the technology landscape. Starting today and continuing tomorrow, many technologists are gathering in Bellevue, Washington for WasmCon to learn more about the technology and hear talks from industry experts.

The results of the State of WebAssembly 2023 report were published at the event, revealing that 58% of users are utilizing WebAssembly for web applications, 35% for data visualization, 32% for IoT, and 30% for AI. Other common uses were games, backend services, edge computing, and platform emulation. 

“This indicates that WebAssembly has a lot of potential and can be beneficial to all developers across a multitude of sectors and not just those involved in front-end web development,” the report authors concluded.

When asked what brought them to WebAssembly, 23% said faster loading times, 22% said exploring new use cases and technologies, 20% said to be able to share code between projects, 20% said improved performance over JavaScript, and 19% said efficient execution of computationally intensive tasks. 

One of the benefits of WebAssembly is its portability, and 64% of respondents are porting existing applications to new platforms and 62% are migrating existing applications to new languages. Seventy-six percent of respondents are developing new applications in WebAssembly. 

34% of survey respondents said they are making use of the WebAssembly System Interface (WASI), and another 34% plan to adopt it in the next year. 

The most recent stable iteration of the WASI standard was announced earlier this summer. WASI-Preview-2 focuses on making improvements in three areas: the core WebAssembly specification, WebAssembly Components and WebAssembly Interface Types, and WASI. 

Notable improvements to the core specification included development of a core Wasm threads prototype and garbage collection.

Key updates to WebAssembly Components and WebAssembly Interface Types included integration of component naming and versioning, and the addition of resource and handle types.

“The WebAssembly Component Model is more than just a standard,” Liam Randall, CEO of Cosmonic and co-chair of WasmCon, said. “It’s a movement of people that are standardizing on supporting the WebAssembly component model, because of its properties, like radical portability. Components that run on Cosmonic run on any other WebAssembly component framework, as well. And that’s the magic.”

WASI added two new world definitions, which are a complete description of “both imports and exports of a component and may be used to represent the execution environment of a component,” according to the Bytecode Alliance, a nonprofit organization built around WebAssembly standards. The two new world definitions in WASI-Preview-2 are CLI world, which provides the commonly available APIs and command-line facilities, and HTTP Proxy world, which is an environment that captures an intersection of hosts, including HTTP forward and reverse proxies. 

Another major announcement today was the Cloud Native Computing Foundation (CNCF) publishing the initial Wasm landscape, which includes 120 projects spread across 11 categories. 

The 11 categories are grouped into two groups: application development and application deployment. Categories in application development include programming languages, runtimes, application frameworks, edge/bare metal, AI inference, embedded function, and tooling. Application deployment categories include orchestration and management, hosted platform, debugging and observability, and artifacts. 

“As Wasm is adopted across cloud-native projects, products, and services, the CNCF worked together with the Wasm community to create a Wasm landscape to help better understand the scope of the Wasm ecosystem. As the original Cloud Native Landscape helped chart the massive ecosystem around cloud native technologies, we believe the same is needed for Wasm as the ecosystem evolves and grows,” the authors of the landscape wrote in a blog post.

Google Cloud Next ‘23: Updates to infrastructure, Vertex AI, analytics, and more
https://sdtimes.com/ai/google-cloud-next-23-updates-to-infrastructure-vertex-ai-analytics-and-more/ (Tue, 29 Aug 2023)

Google Cloud Next kicked off today, with the company highlighting the progress it’s made over the past year as well as showcasing some of its new offerings. 

“We are in an entirely new era of digital transformation, fueled by gen AI. This technology is already improving how businesses operate and how humans interact with one another. It’s changing the way doctors care for patients, the way people communicate, and even the way workers are kept safe on the job. And this is just the beginning,” Thomas Kurian, CEO of Google Cloud, wrote in a blog post.

In addition to announcing the general availability of Duet AI in Google Workspace, the company shared updates across infrastructure, Vertex AI, analytics, and security, as well as sharing new innovations from some of its partners. 

Infrastructure

The company announced several new capabilities related to infrastructure. According to Google Cloud, over the past 25 years it has grown its network to include 38 cloud regions around the world. In addition, over 70% of gen AI unicorns are running their training models on Google Cloud.

Today the company is announcing Cloud TPU v5e. Cloud TPUs are AI accelerators that are optimized for training large AI models. Cloud TPU v5e can scale to tens of thousands of chips and provides a 2x improvement in training performance per dollar and 2.5x improvement in inference performance per dollar compared to Cloud TPU v4. 

The company also announced the upcoming availability of A3 VMs and a Cross-Cloud network that makes it easy to access Google services from any cloud. 

Vertex AI

It also made several updates to Vertex AI, which is an AI platform for building, deploying, and scaling machine learning models. 

Vertex AI Search and Conversation is now generally available, making it easier to develop generative search and conversation capabilities. 

Google Cloud has also added new models to the platform, including Meta’s Llama 2 and Code Llama, and Technology Innovation Institute’s Falcon LLM. It announced Claude 2 from Anthropic will be coming to the platform at some point as well.

 

New tools and capabilities were also added to the platform, including new tools for tuning models, extensions, digital watermarking, and Colab Enterprise. 

Data analytics

“Data sits at the center of gen AI, which is why we are bringing new capabilities to Google’s Data and AI Cloud that will help unlock new insights and boost productivity for data teams,” Kurian wrote. 

The company announced BigQuery Studio, which brings together data engineering, analytics, and predictive analysis into a single interface.

It also revealed AlloyDB AI, which provides capabilities for building generative AI applications.

Google Cloud also works with a number of partners to provide customers with new solutions, and some of these partners are also releasing new capabilities to help customers accelerate their generative AI development. These companies include Confluent, DataRobot, Dataiku, Datastax, Elastic, MongoDB, Neo4j, Redis, SingleStore, and Starburst. 

It has also worked with Acxiom, Bloomberg, TransUnion, and ZoomInfo to add more training datasets to Analytics Hub.

Security

Google Cloud announced Mandiant Hunt for Chronicle, which is a new service that allows customers to analyze security data and gain support in their security efforts.

The company also announced Cloud Firewall Plus, which adds advanced threat protection to its firewall service. Another new service is Network Service Integration Manager, which can be used to integrate third-party firewall virtual appliances.

“Google Cloud is the only leading security provider that brings together the essential combination of frontline intelligence and expertise, a modern SecOps platform, and a trusted cloud foundation, all infused with the power of gen AI, to help drive the security outcomes you’re looking to achieve,” Kurian wrote. 

Partner updates

As Google Cloud works with a number of different companies, the company is also highlighting some of the new capabilities that partners have created using Google services.

Docusign announced new generative AI features that it built with Vertex AI, such as the new smart contract assistant, which can summarize, explain, and answer questions about contracts or documents.

SAP has been working with Vertex AI as well to build new capabilities to support different business use cases, such as “streamlining automotive manufacturing or improving sustainability.”

Another company working with Google Cloud is Workday. It has been working on new capabilities using Google Cloud, such as being able to generate job descriptions.
