HCL Archives - SD Times
https://sdtimes.com/tag/hcl/

Discerning reality from the hype around AI
https://sdtimes.com/ai/discerning-reality-from-the-hype-around-ai/ | Mon, 03 Jun 2024

When it comes to artificial intelligence and applying it to software development, it’s hard to discern between the hype and the reality of what can be done with it today.

The portrayal of AI in movies makes the technology seem scary, suggesting that in the not-too-distant future humans will be slaves to the machines. Other films show AI being used for all kinds of things that are far off in the future – and most likely unreal. The reality, of course, is somewhere in between.

While there is a need to tread carefully into the AI realm, what has been done already, especially in the software life cycle, has shown how helpful it can be. AI is already saving developers from mundane tasks while also serving as a partner – a second set of eyes – to help with coding issues and identifying potential problems.

Kristofer Duer, lead cognitive researcher at HCLSoftware, noted that machine learning and AI aren’t yet what they appear to be in, for example, the “Terminator” movies. “It doesn’t have discernment yet, and it doesn’t really understand morality at all,” Duer said. “It doesn’t really understand more than you think it should understand. What it can do well is pattern matching; it can pluck out the commonalities in collections of data.”

Pros and cons of ChatGPT

Organizations are finding the most interest in generative AI and large language models, which can absorb data and distill it into human-consumable formats. ChatGPT has perhaps had its tires kicked the most, yielding volumes of information that are not always accurate. Duer said he’s thrown security problems at ChatGPT, and it has proven it can understand problematic snippets of code almost every time. When it comes to “identifying the problem and summarizing what you need to worry about, it’s pretty damn good.”

One thing it doesn’t do well, though, is understand when it’s wrong. Duer said when ChatGPT is wrong, it’s confident about being wrong. ChatGPT “can hallucinate horribly, but it doesn’t have that discernment to understand what it’s saying is absolute drivel. It’s like, ‘Draw me a tank,’ and it’s a cat or something like that, or a tank without a turret. It’s just wildly off.”

Rob Cuddy, customer experience executive at HCLSoftware, added that in a lot of ways, this is like trying to parent a pre-kindergarten child. “If you’ve ever been on a playground with them, or you show them something, or they watch something, and they come up with some conclusion you never expected, and yet they are – to Kris’s point – 100% confident in what they’re saying. To me, AI is like that. It’s so dependent on their experience and on the environment and what they’re currently seeing as to the conclusion that they come up with.”

Like any relationship, the one between IT organizations and AI is a matter of trust. You build it to find patterns in data, or ask it to find vulnerabilities in code, and it returns an answer. But is that the correct answer?

Colin Bell, the HCL AppScan CTO at HCLSoftware, said he’s worried about developers becoming over-reliant on generative AI, as he is seeing a reliance on things like Meta’s Code Llama and GitHub’s Copilot to develop applications. But those models are only as good as what they have been trained on. “Well, I asked the Gen AI model to generate this bit of code for me, and it came back, and I asked it to be secure as well. So it came back with that code. So therefore, I trust it. But should we be trusting it?”

Bell added that now, with AI tools, less-abled developers can create applications by giving the model some specifications and getting back code, and now they think their job for the day is done. “In the past, you would have had to troubleshoot, go through and look at different things” in the code, he said. “So that whole dynamic of what the developer is doing is changing. And I think AI is probably creating more work for application security, because there’s more code getting generated.”

Duer mentioned that despite the advances in AI, it will still err, offering fixes that could even make security worse. “You can’t just point AI to a repo and say, ‘Go crazy,’” he said. “You still need a scanning tool to point you to the X on the map where you need to start looking as a human.” He mentioned that AI in its current state seems to be correct between 40% and 60% of the time.

Bell also noted the importance of having a human do a level of triage. AI, he said, will make vulnerability assessment more understandable and clear to the analysts sitting in the middle. “If you look at organizations, large financial organizations or organizations that treat their application security seriously, they still want that person in the middle to do that level of triage and audit. It’s just that AI will make that a little bit easier for them.”

Mitigating risks of using AI

Duer said HCLSoftware uses different processes to mitigate the risks of using AI. One, he said, is intelligent finding analytics (IFA), where they use AI to limit the number of findings presented to the user. The other is something called intelligent code analytics (ICA), which tries to determine the security-relevant information of methods or APIs.

“The history behind the two AI pieces we have built into AppScan is interesting,” Duer explained. “We were making our first foray into the cloud and needed an answer for triage. We had to ask ourselves new and very different questions. For example, how do we handle simple ‘boring’ things like source->sink combinations such as file->file copy? Yes, something could be an attack vector, but is it ‘attackable’ enough to present to a human developer? Simply put, we could not present the same number of findings as we had in the past. So, our goal with IFA was not to make a fully locked-down house of protection around all pieces of our code, because that is impossible if you want to do anything with any kind of user input. Instead we wanted to provide meaningful information in a way that was immediately actionable.

“We first tried out a rudimentary version of IFA to see if machine learning could be applied to the problem of ‘is this finding interesting,’ ” he continued. “Initial tests came back showing over 90% effectiveness on a very small sample size of test data. This gave the needed confidence to expand the use case to our trace flow languages. Using attributes that represent what a human reviewer would look at in a finding to determine if a developer should review the problem, we are able to confidently say most findings our engine generates with boring characteristics are now excluded as ‘noise.’ ”

This, Duer said, automatically saves real humans countless hours of work. “In one of our more famous examples, we took an assessment with over 400k findings down to roughly 400 a human would need to review. That is a tremendous amount of focus generated by a scan into the things which are truly important to look at.”
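
Duer doesn’t spell out the model itself, but the attribute-based “is this finding interesting” classification he describes can be sketched as a small supervised classifier. The attribute names, training data and threshold below are invented for illustration – a minimal sketch of the idea, not AppScan’s actual implementation.

```python
# Hypothetical sketch of ML-based finding triage ("is this finding interesting?").
# Attributes and training data are invented; a real system would train on
# large volumes of labeled findings.
from sklearn.ensemble import RandomForestClassifier

# Each finding is described by attributes a human reviewer would weigh:
# [trace length, taints user input (0/1), sanitizer present (0/1), sink severity 0-3]
TRAINING_FINDINGS = [
    [2, 0, 1, 0],  # short trace, no user input, sanitized, low-severity sink
    [9, 1, 0, 3],  # long trace, tainted input, unsanitized, critical sink
    [3, 0, 1, 1],
    [7, 1, 0, 2],
    [1, 0, 1, 0],
    [8, 1, 1, 3],
]
LABELS = [0, 1, 0, 1, 0, 1]  # 0 = "boring" noise, 1 = worth human review

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(TRAINING_FINDINGS, LABELS)

def triage(findings):
    """Return only the findings the model flags for human review."""
    flags = model.predict([f["attrs"] for f in findings])
    return [f for f, keep in zip(findings, flags) if keep]

scan_results = [
    {"id": "F-001", "attrs": [2, 0, 1, 0]},  # e.g., a boring file->file copy
    {"id": "F-002", "attrs": [9, 1, 0, 3]},  # tainted input hitting a critical sink
]
print([f["id"] for f in triage(scan_results)])  # expect only F-002 to survive
```

Applied at scale, that filtering step is what turns an assessment of 400,000 raw findings into the few hundred worth a human’s attention.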

Duer acknowledged the months and even years it can take to prepare data to be fed into a model. And when it came to AI for auto-remediation, Cuddy picked up on the liability factor. “Let’s say you’re an auto-remediation vendor, and you’re supplying fixes and recommendations, and now someone adopts those into their code, and it’s breached, or you have an incident or something goes wrong. Whose fault is it? So there’s those conversations that still sort of have to be worked out. And I think every organization that is looking at this, or would even consider adopting some form of auto-remediation, is still going to need that man in the middle validating that recommendation, for the purposes of incurring that liability, just like we do every other risk assessment. At the end of the day, it’s how much [risk] can we really tolerate?”

To sum it all up, organizations have important decisions to make regarding security, and adopting AI. How much risk can they accept in their code? If it breaks, or is broken into, what’s the bottom line for the company? As for AI, will there come a time when what it creates can be trusted, without laborious validation to ensure accuracy and meet compliance and legal requirements? 

Will tomorrow’s reality ever meet today’s hype?

The importance of security testing (premium)
https://sdtimes.com/test/the-importance-of-security-testing/ | Thu, 28 Mar 2024

With more development teams today using open-source and third-party components to build out their applications, the biggest area of concern for security teams has become the API. This is where vulnerabilities are likely to arise, as keeping on top of updating those interfaces has lagged.

In a recent survey, the research firm Forrester asked security decision makers in which phase of the application lifecycle they planned to adopt certain technologies. Static application security testing (SAST) came in at 34%, software composition analysis (SCA) at 37%, dynamic application security testing (DAST) at 50% and interactive application security testing (IAST) at 40%. Janet Worthington, a senior analyst at Forrester advising security and risk professionals, said the number of people planning to adopt SAST was low because it’s already well known and people have already implemented the practice and tools.

One of the drivers of that adoption was the awakening created by the log4j vulnerability: developers using open source, she said, understand direct dependencies but might not consider dependencies of dependencies.

Open source and SCA

According to Forrester research, 53% of breaches from external attacks are attributed to the application and the application layer. Worthington explained that while organizations are implementing SAST, DAST and SCA, they are not implementing it for all of their applications. “When we look at the different tools like SAST and SCA, for example, we’re seeing more people actually running software composition analysis on their customer-facing applications,” she said. “And SAST is getting there as well, but almost 75% of the respondents who we asked are running SCA on all of their external-facing applications, and that, if you can believe it, is much larger than web application firewalls, and WAFs are actually there to protect all your customer-facing applications. Less than 40% of the respondents will say they cover all their applications.”

Worthington went on to say that more organizations are seeing the need for software composition analysis because of those breaches, but added that a problem with security testing today is that some of the older tools make it harder to integrate early on in the development life cycle. That is when developers are writing their code, committing code in the CI/CD pipeline, and on merge requests. “The reason we’re seeing more SCA and SAST tools there is because developers get that immediate feedback of, hey, there’s something up with the code that you just checked in. It’s still going to be in the context of what they’re thinking about before they move on to the next sprint. And it’s the best place to kind of give them that feedback.”

RELATED CONTENT: A guide to security testing tools

The best tools, she said, are not only doing that, but they’re providing very good remediation guidance. “What I mean by that is, they’re providing code examples, to say, ‘Hey, somebody found something similar to what you’re trying to do. Want to fix it this way?'”

Rob Cuddy, customer experience executive at HCL Software, said the company is seeing an uptick in remediation. Engineers, he said, say, “’I can find stuff really well, but I don’t know how to fix it. So help me do that.’ Auto remediation, I think, is going to be something that continues to grow.”

Securing APIs

When asked what the respondents were planning to use during the development phase, Worthington said, 50% said they are planning to implement DAST in development. “Five years ago you wouldn’t have seen that, and what this really calls attention to is API security,” Worthington said. “[That is] something everyone is trying to get a handle on in terms of what APIs they have, the inventory, what APIs are governed, and what APIs are secured in production.”

And now, she added, people are putting more emphasis on trying to understand what APIs they have, and what vulnerabilities may exist in them, during the pre-release phase or prior to production. DAST in development signals an API security approach, she said, because “as you’re developing, you develop the APIs first before you develop your web application.” Forrester, she said, is seeing that as an indicator of companies embracing DevSecOps, and that they are looking to test those APIs early in the development cycle.

API security also has a part in software supply chain security, with IAST playing a growing role, and encompassing parts of SCA as well, according to Colin Bell, AppScan CTO at HCL Software. “Supply chain is more a process than it is necessarily any feature of a product,” Bell said. “Products feed into that. So SAST and DAST and IAST all feed into the software supply chain, but bringing that together is something that we’re working on, and maybe even looking at partners to help.”

Forrester’s Worthington explained that DAST really is black box testing, meaning it doesn’t have any insights into the application. “You typically have to have a running version of your web application up, and it’s sending HTTP requests to try and simulate an attacker,” she said. “Now we’re seeing more developer-focused test tools that don’t actually need to hit the web application, they can hit the APIs. And that’s now where you’re going to secure things – at the API level.”
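
Worthington’s description of DAST – no insight into the application’s internals, just HTTP requests that simulate an attacker against a running app or API – can be illustrated with a minimal probe. The target URL, parameter name and payloads below are placeholders; real DAST tools run far larger payload corpora and much deeper response analysis.

```python
# Minimal black-box (DAST-style) probe: send attack-shaped inputs over HTTP
# and inspect the responses. Target and payloads are illustrative placeholders.
import requests

TARGET = "https://api.example.test/search"  # hypothetical API under test
PAYLOADS = [
    "<script>alert(1)</script>",  # naive reflected-XSS probe
    "' OR '1'='1",                # naive SQL-injection probe
]

def probe(url: str, param: str = "q"):
    findings = []
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=5)
        # A payload echoed back verbatim, or a server error, is only a signal
        # worth human triage -- real scanners analyze responses far more deeply.
        if payload in resp.text or resp.status_code >= 500:
            findings.append((param, payload, resp.status_code))
    return findings

if __name__ == "__main__":
    for finding in probe(TARGET):
        print("possible issue:", finding)
```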

The way these newer tools work, she said, is you use the functional tests you already run for QA, like smoke tests and automated functional tests. What IAST does is watch everything the application is doing while those tests run and try to figure out if there are any vulnerable code paths.
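
That “watch the app while ordinary tests run” idea can be sketched with Python’s tracing hook. The application code, the sink list and the taint marker here are all invented for illustration; real IAST agents instrument the runtime far more thoroughly.

```python
# Toy illustration of the IAST idea: observe the app while functional tests
# run, and flag when "tainted" input reaches a risky sink. The app code,
# sink list and taint marker are invented for illustration.
import sys

RISKY_SINKS = {"run_sql"}  # hypothetical sink functions worth watching
observed = []

def run_sql(query):  # stand-in for a database call
    return f"executed: {query}"

def search_products(user_input):  # stand-in for application code under test
    return run_sql("SELECT * FROM products WHERE name = '" + user_input + "'")

def tracer(frame, event, arg):
    # Record every call into a risky sink whose arguments still contain raw
    # user input -- a crude stand-in for real taint tracking.
    if event == "call" and frame.f_code.co_name in RISKY_SINKS:
        if any("USER:" in str(v) for v in frame.f_locals.values()):
            observed.append((frame.f_code.co_name, dict(frame.f_locals)))
    return tracer

sys.settrace(tracer)
search_products("USER:widget' OR '1'='1")  # an ordinary functional-test input
sys.settrace(None)

for sink, args in observed:
    print(f"tainted data reached sink {sink}: {args}")
```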

Introducing AI into security

Cuddy and Bell both said they are seeing more organizations building AI and machine learning into their offerings, particularly in the areas of cloud security, governance and risk management.

Historically, organizations have operated with a sense of what is acceptable risk and what is not, and have understood their threshold. Yet cybersecurity has changed that dramatically: when a zero-day event occurs, it presents a risk the organization has never been able to assess before.

“The best example we’ve had recently of this is what happened with the log4j scenario, where all of a sudden, something that people had been using for a decade, that was completely benign, we found one use case that suddenly means we can get remote code execution and take over,” Cuddy said. “So how do you assess that kind of risk? If you’re primarily basing risk on an insurance threshold or a cost metric, you may be in a little bit of trouble, because things that today are under that threshold that you think are not a problem could suddenly turn into one a year later.”

That, he said, is where machine learning and AI come in, with the ability to run thousands – if not millions – of scenarios to see if something within the application can be exploited in a particular fashion. And Cuddy pointed out that as most organizations are using AI to prevent attacks, there are unethical people using AI to find vulnerabilities to exploit. 

He predicted that five or 10 years down the road, you will ask AI to generate an application according to the data input and prompts it is given. The AI will write code, but it’ll be the most efficient, machine-to-machine code – code that humans might not even understand, he noted.

That will transform the need for developers. But it comes back to the question of how far out that is. “Then,” Bell said, “it becomes much more important to worry about, and testing now becomes more important. And we’ll probably move more towards the traditional testing of the finished product and black box testing, as opposed to testing the code, because what’s the point of testing the code when we can’t read the code? It becomes a very different approach.”

Governance, risk and compliance

Cuddy said HCL is seeing the roles of governance, risk and compliance coming together, where in a lot of organizations, those tend to be three different disciplines. And there’s a push for having them work together and connect seamlessly. “And we see that showing up in the regulations themselves,” he said. 

“Things like NYDFS [New York Department of Financial Services] regulation is one of my favorite examples of this,” he continued. “Years ago, they would say things like you have to have a robust application security program, and we’d all scratch our heads trying to figure out what robust meant. Now, when you go and look, you have a very detailed listing of all of the different aspects that you now have to comply with. And those are audited every year. And you have to have people dedicated to that responsibility. So we’re seeing the regulations are now catching up with that, and making the specificity drive the conversation forward.”

The cost of cybersecurity

The cost of cybersecurity attacks continues to climb as organizations fail to implement safeguards necessary to defend against ransomware attacks. Cuddy discussed the costs of implementing security versus the cost of paying a ransom.

“A year ago, there were probably a lot more of the ‘hey, you know, look at the level, pay the ransom, it’s easier’ conversations,” he said. But even if organizations pay the ransom, Cuddy said, “there’s no guarantee that if we pay the ransom, we’re going to get a key that actually works, that’s going to decrypt everything.”

But cyber insurance companies have been paying out huge sums and are now requiring organizations to do their own due diligence. “They have gotten smart and they’ve realized, ‘Hey, we’re paying out an awful lot in these ransomware things. So you better have some due diligence.’ And so what’s happening now is they are raising the bar on what’s going to happen to you to stay insured.”

“MGM could tell you their horror stories of being down and literally having everything down – every slot machine, every ATM, every cash register,” Cuddy said. And again, there’s no guarantee that if you pay the ransom, you’re going to be fine. “In fact,” he added, “I would argue you’re likely to be attacked again, by the same group. Because now they’ll just go somewhere else and ransom something else. So I think the cost of not doing it is worse than the cost of implementing good security practices and good measures to be able to deal with that.”

When applications are used in unexpected ways

Software testers repeatedly say it’s impossible to test for all the unintended ways people might use an application. How can you defend against something you haven’t even thought of?

Rob Cuddy, customer experience executive at HCL Software, tells of how he learned of the log4j vulnerability.

“Honestly, I found out about it through Minecraft, that my son was playing Minecraft that day. And I immediately ran up into his room, and I’m like, ‘Hey, are you seeing any bizarre things coming through in the chat here that look like weird textures that don’t make any sense?’ So who would have anticipated that?”

Cuddy also related a story from earlier in his career about unintended use, how it was dealt with, and how organizations harden against it.

“There is always going to be that edge case that your average developer didn’t think about,” he began. “Earlier in my career, doing finite element modeling, I was using a three-dimensional tool, and I was playing around in it one day, and you could make a join of two planes together with a fillet. And I had asked for a radius on that. Well, I didn’t know any better. So I started using just typical numbers, right? 0, 180, 90, whatever. One of them, I believe it was 90 degrees, caused the software to crash, the window just completely disappeared, everything died.

“So I filed a ticket on it, thinking our software shouldn’t do that. Couple of days later, I get a much more senior gentleman running into my office going, ‘Did you file this? What the heck is wrong with you? Like this is a mathematical impossibility. There’s no such thing as a 90-degree fillet radius.’ But my argument to him was it shouldn’t crash. Long story short, I talk with his manager, and it’s basically yes, software shouldn’t crash, we need to go fix this. So that senior guy never thought that a young, inexperienced, just fresh out of college guy would come in and misuse the software in a way that was mathematically impossible. So he never accounted for it. So there was nothing to fix. But one day, it happened, right. That’s what’s going on in security, somebody’s going to attack in a way that we have no idea of, and it’s going to happen. And can we respond at that point?”  

HCL Software announces rebrand of DevOps portfolio
https://sdtimes.com/devops/hcl-software-announces-rebrand-of-devops-portfolio/ | Thu, 07 Dec 2023

HCL Software has announced it has renamed its DevOps portfolio to better align products with their core functionality. The company hopes that with this rebrand, it will be easier for customers to navigate the portfolio and get to the right product.

The company recently hosted a webinar to announce the changes, where Chris Haggan, head of product for DevOps at HCL Software, explained that the old names were a “bit opaque.” “They don’t necessarily convey a sense of family,” he said. “They’re not straightforward for people to understand what each product does.”

The name changes are as follows:

  • HCL Software DevOps → HCL DevOps Automation
  • HCL Accelerate → HCL DevOps Velocity
  • HCL Launch → HCL DevOps Deploy
  • HCL OneTest → HCL DevOps Test
  • HCL Compass → HCL DevOps Plan
  • HCL VersionVault → HCL DevOps Code ClearCase
  • HCL RTist → HCL DevOps Model RealTime
  • HCL RTist in Code → HCL DevOps Code RealTime

“With this refresh, we’re going to provide customers with a much more direct understanding of each product’s core functionality, emphasizing our integrated approach across the DevOps portfolio,” he said. 

He also clarified that only the names have been changed, and that as of now, no changes have been made to the functionality of any of these applications. However, there will continue to be growth and investment into the products, and the company will have some new releases to share later this month. 

A guide to release automation tools
https://sdtimes.com/ai/a-guide-to-release-automation-tools-2/ | Mon, 03 Oct 2022

The following is a listing of release automation tool providers, along with a brief description of their offerings.

HCL Accelerate is a data-driven value stream management platform that automates the delivery and interpretation of data so businesses can make faster, more strategic decisions and streamline processes. By integrating with the tools you’re already using, HCL Accelerate aggregates data from across your DevOps pipeline to give you actionable insights so you can get the most out of your DevOps investments. HCL Accelerate is part of HCL Software DevOps, a comprehensive DevOps product suite comprised of powerful, industry-proven software solutions.

Octopus Deploy sets the standard for deployment automation for DevOps. We help software teams deploy freely – when and where they need, in a streamlined, routine way. More than 3,000 organizations and 350,000 users worldwide use our universal deployment automation tool and framework to make their complex deployments easy. From modern containers and microservices to trusted legacy applications, Octopus orchestrates software delivery in data centers, multi-cloud, and hybrid IT infrastructure.

Atlassian: Bitbucket Pipelines is a modern cloud-based continuous delivery service that automates the code from test to production. Bamboo is Atlassian’s on-premises option with first-class support for the “delivery” aspect of Continuous Delivery, tying automated builds, tests and releases together in a single workflow. 

CA Technologies, A Broadcom Company: CA Technologies’ solutions address the wide range of capabilities necessary to minimize friction in the pipeline to achieve business agility and compete in today’s marketplace. These solutions include everything from application life cycle management to release automation to continuous testing to application monitoring—and much more. 

Chef: Chef Automate, the leader in Continuous Automation, provides a platform that enables you to build, deploy and manage your infrastructure and applications collaboratively. Chef Automate works with Chef’s three open-source projects: Chef for infrastructure automation, Habitat for application automation, and InSpec for compliance automation, as well as associated tools.

CloudBees: The CloudBees Suite builds on continuous integration and continuous delivery automation, adding a layer of governance, visibility and insights necessary to achieve optimum efficiency and control new risks. This automated software delivery system is becoming the most mission-critical business system in the modern enterprise.

Digital.ai: The company’s Deploy product helps organizations automate and standardize complex, enterprise-scale application deployments to any environment — from mainframes and middleware to containers and the cloud. Speed up deployments with increased reliability. Enable self-service deployment while maintaining governance and control.

GitLab: GitLab’s built-in continuous integration and continuous deployment offerings enable developers to easily monitor the progress of tests and build pipelines, then deploy with confidence across multiple environments – with minimal human interaction.

IBM: UrbanCode Deploy accelerates delivery of software change to any platform – from containers on cloud to mainframe in data centers. Manage build configurations and build infrastructures at scale. Release interdependent applications with pipelines of pipelines, plan release events, orchestrate simultaneous deployments of multiple applications. 

LaunchDarkly is a feature management platform that empowers all teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, LaunchDarkly enables you to deploy faster, reduce risk, and iterate continuously. Over 1,500 organizations around the world — including Atlassian, IBM, and Square — use LaunchDarkly to control the entire feature lifecycle from concept, to launch, to value.

Micro Focus: ALM Octane provides a framework for a quality-oriented approach to software delivery that reduces the cost of resolution, enables faster delivery, and enables adaptability at scale. Deployment Automation seamlessly enables deployment pipeline automation reducing cycle times and providing rapid feedback on deployments and releases across all your environments.

Microsoft: Microsoft’s Azure DevOps Services solution features Azure Pipelines for CI/CD initiatives; Azure Boards for planning and tracking; Azure Artifacts for creating, hosting and sharing packages; Azure Repos for collaboration; and Azure Test Plans for testing and shipping.  

Puppet Enterprise offers full life-cycle infrastructure management, including configuration management. It creates end-to-end infrastructure automation from the build process through continuous operations (with ongoing patching and policy enforcement) to end-of-life, while removing manual, repetitive steps throughout the operational process.

VMware: With VMware Tanzu, you can automate the delivery of containerized workloads, and proactively manage apps in production. It’s all about freeing developers to do their thing: build great apps. Enterprises that use Tanzu Advanced benefit from developer velocity, security from code to customer, and operator efficiency.

How these solution providers support release automation
https://sdtimes.com/ai/how-these-solution-providers-support-release-automation/ | Mon, 03 Oct 2022

We asked these tool providers to share more information on how their solutions help companies with release automation. Their responses are below.

HCL Software

Ryley Robinson, product marketing manager at HCL Software

HCL Accelerate with HCL Launch is an enterprise-grade continuous release orchestration solution within the powerful HCL DevSecOps tool chain. HCL Accelerate is the value stream management product with broad release management capabilities. With plugins that can integrate with any deployment solution, either natively or through an API-driven pipeline, enterprises can easily orchestrate their complex releases through HCL Accelerate.

As enterprises break their monolithic applications into cloud-native microservices, it becomes even more important to have the releases orchestrated through an enterprise-grade release management product like HCL Accelerate that can understand the dependencies, complexities, and all-or-nothing deployment strategies.

HCL Accelerate provides the automated governance for the release orchestration with data-driven insights. Accelerate as a Value Stream Management tool and as a release orchestrator has deep insights into DevOps processes and can help enterprises to identify the bottlenecks even before individual teams realize the pain. HCL Accelerate provides visibility of the entire pipeline and provides a bird’s eye view of where and when the changes that would hugely impact the business are in the delivery pipeline. Accelerate also provides full visibility for developers on the impact of the changes that they are working on. 

HCL Accelerate working with HCL Launch provides the best-in-class release management / deployment automation solution out there. With HCL Launch’s “deploy from anywhere to anywhere” capabilities, enterprise DevOps teams are delighted to find they can automate deployments to a broad mix of environments such as mainframes, microservices, on-prem, public, private, and hybrid cloud. HCL Launch works everywhere with a massive plugin site that includes over 300 integrations and easily connects source configuration repositories, change management systems, or middleware. 

Octopus Deploy 

Colin Bowern, senior vice president of product at Octopus Deploy

Octopus Deploy is the universal deployment automation company. We help software teams deploy software in a continuous and stress-free way. 

Octopus simplifies complex deployment processes, allowing software solutions to be delivered faster and in a unified way to various deployment environments. Octopus Deploy addresses the needs of enterprise organizations, which no longer need to choose between the speed and the quality of their software deployments. It also provides robust permission and auditing capabilities to ensure internal and external compliance. Octopus Deploy integrates out of the box with leading CI/CD solutions to streamline deployment pipelines and to provide additional value to organizations’ existing systems, such as ITSM services like Jira and ServiceNow.

Based on its work with more than 350,000 IT professionals, Octopus Deploy has seen firsthand how the success of DevOps aligns with great deployment automation practices. It helps enterprise DevOps teams deploy software more effectively by eliminating error-prone manual processes associated with software change management and provides insights into DevOps performance based on the four DORA metrics.
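
As a rough sketch of what those DORA insights measure, the four metrics reduce to simple arithmetic over deployment history. The records and field layout below are invented; real tools pull this data from CI/CD history and incident trackers.

```python
# Illustrative computation of the four DORA metrics from invented records.
from datetime import datetime, timedelta
from statistics import mean

deployments = [
    # (commit time, deploy time, caused an incident?, time to restore)
    (datetime(2022, 9, 1, 9), datetime(2022, 9, 1, 15), False, None),
    (datetime(2022, 9, 2, 10), datetime(2022, 9, 3, 11), True, timedelta(hours=2)),
    (datetime(2022, 9, 5, 8), datetime(2022, 9, 5, 12), False, None),
]

days_observed = 7
deploy_frequency = len(deployments) / days_observed  # deployments per day
lead_time_hours = mean((d - c).total_seconds() / 3600 for c, d, _, _ in deployments)
restores = [r for _, _, failed, r in deployments if failed]
change_failure_rate = len(restores) / len(deployments)
mttr_hours = mean(r.total_seconds() / 3600 for r in restores)

print(f"deployment frequency: {deploy_frequency:.2f}/day")
print(f"mean lead time for changes: {lead_time_hours:.1f} h")
print(f"change failure rate: {change_failure_rate:.0%}")
print(f"mean time to restore: {mttr_hours:.1f} h")
```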

Octopus Deploy is offered as both a self-hosted and SaaS offering. As part of the growing trend toward moving to SaaS in the DevOps tooling space, the company is committed to making Octopus trustworthy, secure and scalable.   

Release automation: Key to winning the time-to-market race
https://sdtimes.com/ai/release-automation-key-to-winning-the-time-to-market-race/ | Mon, 03 Oct 2022

As the number of components that organizations have to manage throughout their application delivery process grows, companies are looking to get more from their application release automation (ARA) platforms. These platforms can help organizations automate the process of releasing software applications and may include tools for managing code changes, deployments, testing, and other aspects of the release process.

Today, nearly all (90.5%) organizations are releasing features with a lead time of a month or less, a figure that increased by 26 percentage points from 2020. In addition, the share of organizations delivering features in 1-2 weeks doubled between 2020 and 2021, according to IDC’s U.S. Accelerated Application Delivery Survey from January 2022.

Pushing applications through to production has long been a source of difficulty for organizations, consuming significant time and producing plenty of errors, especially when there are many applications to release.

“Enterprises are now looking to automate the deployment of applications that have a hybrid tech stack, as well as multiple microservices with heavy version dependencies,” said Ryley Robinson, product marketing manager at HCL Software. “For example, this can be a single application with some on-prem deployments, legacy deployments on mainframes, IBM iSeries, and some cloud deployments across different hyperscalers.”

On top of that, enterprises want to do all-or-none, canary, blue-green, rolling, and/or A/B deployments – all from a single ARA solution. 
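
As a sketch of what one of those strategies involves, below is the basic canary loop: shift a small slice of traffic to the new version, watch a health signal, then either continue or roll back. The traffic-routing and error-rate functions are hypothetical hooks standing in for whatever the load balancer and monitoring system actually expose.

```python
# Illustrative canary rollout: shift traffic in steps, bail out on bad health.
# route_traffic() and error_rate() are hypothetical stand-ins for real
# load-balancer and monitoring integrations.
import random

def route_traffic(percent_to_new: int) -> None:
    print(f"routing {percent_to_new}% of traffic to the new version")

def error_rate() -> float:
    return random.uniform(0.0, 0.02)  # stand-in for a real metrics query

def canary_deploy(steps=(5, 25, 50, 100), max_error_rate=0.01) -> bool:
    for percent in steps:
        route_traffic(percent)
        if error_rate() > max_error_rate:
            route_traffic(0)  # roll back: all traffic to the old version
            print(f"rollback at {percent}%: error rate above threshold")
            return False
    print("canary complete: new version serving 100% of traffic")
    return True

if __name__ == "__main__":
    canary_deploy()
```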

“Organizations have automated most of their deployment processes, but they still need to understand that organizations always go through modernization initiatives on their business-critical applications to remove the technical debt and to get the benefit of the latest innovations in the technology world,” Robinson said. “So, release automation is not something that is ‘done once and forget it.’ It is an ongoing process that evolves every week, every month. It is still automated, but there is still a lot to do.”

Three areas to start ARA

Despite its name that suggests its position at the end of a pipeline, release automation can excel in three different areas, according to Colin Bowern, senior vice president of product at Octopus Deploy.

The first is the whole non-production flow, which is where a lot of people get started with release automation. Errors here have minimal impact if something goes wrong.

“This is stuff that you do on a very regular basis, and if you’re coming from a world of manual steps, or fragile scripts, it’s like, boy, it would be a whole lot easier if every time I commit a change to source control, it gets deployed out to a test environment,” Bowern said. “So for a lot of folks, this is the safest way that you can get started.” 

Production, on the other hand, tends to be the second stage, but it’s also where the greatest ROI on release automation comes from, according to Bowern. 

Before release automation, all of an application’s stakeholders had to be on deck for the release in case something went wrong.

The goal of ARA is to make application releases as orderly and stress-free as possible. After all, the process used to deploy to production is the same as the one used to deploy to non-production environments, Bowern added.

ARA can automate all of the things that happen on an ad-hoc or scheduled basis around an environment, such as troubleshooting, resetting databases, or running scripts.

While adoption of ARA still has a long way to go, many organizations that decide to leave their manual ways behind do so after a bad deployment, when they realize they’re down in the pit with an hours-long, human-error-prone process and think there has to be a better way, Bowern said.

Others decide that they can just use their CI workflow automation tool because it does the builds and the tests. “While it’s a great place to get started when things are very simple, CI tools don’t understand environments. They don’t understand how to do rollbacks. To work around this, teams will kind of get some consistency by creating reusable workflows, but all of that custom logic and variables and stuff like that creeps in, and it becomes really hard to reuse across projects and is hard to maintain,” Bowern added. 

The third scenario is that people come from stack-specific CD tools, such as Argo for Kubernetes, or any tools that were purpose-built for an environment and do it really well.

“These are great quickstarts to help you do the right thing early inside your stack, but they were designed for that stack and that stack alone, and if you want to deploy your wider enterprise portfolio of apps, you won’t do that on Argo, or if you do, you’ll have to hack around it to make it happen,” Bowern said. 

Many organizations have to juggle multiple tech stacks, data centers, and multiple cloud providers, so ARA helps to work around some of those stack-specific tools and ensure compliance on deployment type, whether it’s .NET, Java, node, VMs, containers, or serverless, Bowern explained. 

“You can find out whether you need to improve flow just by going and talking to the engineers on the team. The question I love to ask is: if I needed you to deploy something to production today, a small change that was blocking the business, is that a big deal?” Bowern said. “If the answer is yes, which you’d be surprised to find can be ‘I have to go sign off on this form’ or ‘I have to schedule this window,’ that’s the thing you need to first get rid of so you don’t have that friction and can go on your improvement journey.”

Moving forward, ARA vendors are looking to incorporate or expand on existing AI/ML to handle tasks ranging from automatic code generation with tools like GitHub’s Copilot to testing and deployment, to help address increased software velocity and the complexity of multi-modal deployment platforms, according to Melinda-Carol Ballou, research director of Agile ALM, Quality, and Portfolio Strategies at IDC.

How microservices affect ARA

ARA has turned out to be particularly useful alongside the growth of microservices, Octopus Deploy’s Bowern explained.

“We’ve certainly observed and we hear this because teams started asking us for dependency management, ‘How do I make sure project A doesn’t go out the door before project B?’ And so we see people struggling as they adopt microservices to get that true independence model, and they end up trying to orchestrate different components shipping at different times,” Bowern said. 

Some companies decide that the added complexity that comes with microservices is not worth it, and they instead embrace the monolith where all dependencies are synchronized because it’s all shipped as one big box, according to Bowern. 

Microservices need the concept of snapshots where different versions of different microservices are grouped and tested. A good ARA solution should be able to guarantee that a snapshot containing dependent versions of microservices gets deployed properly and should also be able to assure that what is tested together is deployed together, according to HCL’s Robinson. 
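
Robinson’s “snapshot” – a tested grouping of dependent microservice versions that must ship together or not at all – maps naturally onto a small data structure plus an all-or-nothing deploy. The service names and deploy/rollback hooks below are invented for illustration, not any vendor’s actual API.

```python
# Illustrative "snapshot": the set of microservice versions tested together
# must deploy together, all or nothing. deploy_service()/rollback_service()
# are hypothetical stand-ins for an ARA tool's real deployment steps.
SNAPSHOT = {
    "orders-service": "2.4.1",
    "payments-service": "1.9.0",
    "inventory-service": "3.0.2",
}

def deploy_service(name: str, version: str) -> bool:
    print(f"deploying {name}@{version}")
    return True  # a real implementation would report success or failure

def rollback_service(name: str) -> None:
    print(f"rolling back {name} to its previous version")

def deploy_snapshot(snapshot: dict) -> bool:
    deployed = []
    for name, version in snapshot.items():
        if deploy_service(name, version):
            deployed.append(name)
        else:
            # Any failure unwinds every service already deployed, so the
            # environment never runs an untested mix of versions.
            for done in reversed(deployed):
                rollback_service(done)
            return False
    return True

if __name__ == "__main__":
    deploy_snapshot(SNAPSHOT)
```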

ARA is a process within continuous deployment

The process of ARA is a vital part of continuous deployment, which should be treated as a set of philosophies and principles, Octopus Deploy’s Bowern said. 

Continuous delivery is all about not letting changes sit idly so that they build up into big batches. It says that you’re reducing risk by releasing regularly into the various environments along the way and getting changes moving. 

“And so if you take that as a philosophy, release automation is a really critical tool in that, because you can’t get that speed and do it manually,” Bowern said. “We continually hear from customers that it takes them hours to deploy because it’s not just copying a binary to a server. It’s all the steps you need to do to migrate the database, or bring a load balancer down, and these are all the same things you did last week, or yesterday, or last month. And so automation is truly a part of living that philosophy of continuous delivery, not whether you deploy to production every day.”

Effective ARA relies on visibility into what’s happening in requirements, development and testing, according to HCL’s Robinson. On top of that, teams need data from quality assurance products for functional, performance, and application security testing.

A strong set of plugins can help release managers make the go/no-go decisions. Instead of having multiple manual checklists in spreadsheets, if the release management solution can provide automatic gating based on quality criteria by pulling data from multiple sources, it makes it easy to release software with confidence, Robinson added. 
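
The automatic gating Robinson describes – replacing manual spreadsheet checklists with go/no-go criteria evaluated against data pulled from multiple tools – reduces to a small rule check. The criteria, thresholds and fetched metrics below are invented examples, not any product’s built-in gate.

```python
# Illustrative release gate: pull metrics from multiple (here: faked) sources
# and evaluate go/no-go criteria automatically. Thresholds are invented.
GATE_CRITERIA = {
    "test_pass_rate_min": 0.98,  # from functional-test results
    "p95_latency_ms_max": 300,   # from performance tests
    "critical_vulns_max": 0,     # from application security scans
}

def collect_metrics() -> dict:
    # Stand-in for querying QA, performance and AppSec tools via their APIs.
    return {"test_pass_rate": 0.995, "p95_latency_ms": 240, "critical_vulns": 0}

def evaluate_gate(metrics: dict, criteria: dict) -> bool:
    return (
        metrics["test_pass_rate"] >= criteria["test_pass_rate_min"]
        and metrics["p95_latency_ms"] <= criteria["p95_latency_ms_max"]
        and metrics["critical_vulns"] <= criteria["critical_vulns_max"]
    )

if __name__ == "__main__":
    verdict = "GO" if evaluate_gate(collect_metrics(), GATE_CRITERIA) else "NO-GO"
    print(f"release gate verdict: {verdict}")
```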

ARA is also a vital component of value stream management (VSM) where there are huge benefits to having one orchestrator across different middleware, Robinson stated. This can make it easy for the development and operations teams to get going on day one using pre-built templates instead of spending time automating release management for each application. 

ARA tools come with some challenges

However, release automation tools don’t come without some challenges. 

There is a lack of standardization in the field and no one-size-fits-all release automation tool. Each organization has different needs, and no one tool can meet all of them.

Companies that have stalled in the middle of their DevOps journey have failed to address or understand the cultural, organizational, and process changes required to adopt a new way of working with technology. 

These companies invested in automation, with 67% of mid-evolution respondents to Puppet’s 2021 State of DevOps report saying their team has automated most repetitive tasks. But, as an organization, they haven’t addressed the silos and misaligned incentives around deploying software to production that gave rise to the DevOps movement, since 58% of companies reported that multiple handoffs between teams are required for deployment of products and services.

However, the biggest barriers are often cultural and organizational, because effective release automation demands a transition to continuous, agile approaches to development and release management, according to IDC’s Ballou.

“That transition involves a significant shift in how people do what they do, and human beings are way more wired for consistency than we are for change,” Ballou said. “The coordination between business stakeholders and those creating the software enabled by effective agile approaches brings greater relevance to what is deployed.” 

DevOps engineers have to get the balance right 

Whereas developers are usually at the forefront of adopting ARA on the non-production side, when things get more complicated with the service management system, that’s when DevOps engineers typically come onto the scene. 

“They’re a little bit developer and a little bit SRE and their job is to come in and be those experts that help teams go faster on this because teams aren’t used to automating, they’re just used to cutting code,” Octopus Deploy’s Bowern said. “So the DevOps engineers have the kinds of skills that say that my job now is not to understand how to architect applications, but how to get things moving faster.”

Many organizations have a centralized DevOps Center of Excellence (CoE) that maintains the templates for different ARA strategies and the individual application teams benefit from these templates with enough space to do their customization when needed. There is also a huge benefit in sharing the learning across teams in an enterprise and CoEs help with that.

Stuck in the [DevOps] Middle With You
https://sdtimes.com/devops/stuck-in-the-devops-middle-with-you/ | Thu, 02 Jun 2022

Technology is always changing, and thus the way organizations manage around technology is always changing. There are always new methodologies entering the field, promising various benefits if only you could adopt them correctly.

Many of these fizzle out and remain nothing more than buzzwords, but according to Charles Betz, principal analyst at Forrester, DevOps has been an exception to this “IT fashion show.”

Despite this, a majority of companies aren’t where they could be when it comes to their DevOps evolutions. 

RELATED CONTENT: 
How HCL Software helps companies evolve their DevOps practices
A guide to DevOps tools

According to Puppet’s 2021 State of DevOps report, the majority of companies practicing DevOps are stuck in the middle of their DevOps evolution. This has remained mostly consistent over the past few years, dropping just 1% since 2018, to 79% of companies. 

In 2021, Puppet found that 18% were at a high level of evolution and 4% were at a low level of evolution. Despite the percentage of companies in the mid-level of evolution, the percentage of those on the high or low end actually has shifted over the past four years of the study. By comparison, in 2018, only 10% were highly evolved while 11% were considered to be at the low portion of DevOps evolution. 

So what is keeping so many companies in the middle? And what exactly does it mean to be in “mid-level evolution?” 

Puppet’s report defines mid-level evolution as companies that have already laid their DevOps foundations. “They have introduced automated testing and version control, hired and/or retrained teams, and are working to improve their CI/CD pipelines. They’ve managed to start optimizing for individual teams, and if they’ve managed to avoid many of the foundational dysfunctions from which large organizations can suffer, they’re in a great position to start optimizing for larger departments, the ‘team of teams,’” Puppet explained in the report.

Always looking to improve

Betz argues that DevOps transformation is never truly done. Even though it might have stalled at a certain point in some organizations, DevOps as a practice in general has largely been a success. 

Rob Cuddy, global application security evangelist at HCL Software, agreed, adding that DevOps is a continual evolution of trying to deliver better quality software faster. “So, you’re always going to be improving, and looking to improve as you go,” he said.

Al Wagner, solution architect at HCL Software, added that changing technologies means that DevOps also has to continually change to keep up. It’s a moving target, not a stationary finish line where once you’ve crossed it, you’ve succeeded at DevOps. 

“As DevOps has grown, we discover new problems and new solutions to those problems, where every time we embrace something, if you think about cloud, it has only evolved post this term DevOps, and same with Kubernetes, Docker. So when people get stuck, the beauty of DevOps is it continually evolves and grows, and it’s not locked down by a manifesto,” said Wagner.  

Even so, there are some bottlenecks that companies run into when they’re trying to evolve their DevOps practice. Cuddy believes one bottleneck is a lack of understanding of what you’re doing. He sees this in companies automating for the sake of automating without really knowing why they’re trying to automate some part of the process. 

“The whole goal should be to improve the quality as it goes through the pipeline,” said Cuddy. “But if you’re just running scans, or running tests for the sake of running them, and you’re not doing anything with the results, well, great, you’ve added a lot of automation, but now you’ve created a ton of noise.” 

Three reasons for struggle

Paul Delory, VP analyst at Gartner, also defined three main reasons why he believes an organization might struggle to move forward with DevOps.

First is skills. There are all of these technologies that allow companies to do amazing things, but in order to actually do those amazing things, they need people with the right skillsets on board. Delory explained that a lot of initiatives get stuck because of a lack of talented people. 

The few candidates who do have the skillset you’re looking for don’t tend to stay on the market for very long, and they command really high salaries, which might be difficult for all companies to match.

When companies find themselves in this position, they must look to growing these skills internally instead. But this option is a longer process, so it injects further delays into their DevOps transformation. 

“I think that’s a big part of the reason why a lot of people get stuck on this plateau,” said Delory.

The second reason people stall out in their DevOps transformation is that they might not actually need DevOps in every aspect of their business. 

“If I look at the portfolio of applications that an IT department is asked to support, I think there are a lot of cases where essentially, you don’t have the problem that DevOps solves,” said Delory.

According to Delory, when talking about DevOps, we’re often speaking of fast moving, line of business applications that are directly impacting revenue. But not every application in the company is going to fit that bill, and thus, won’t really be an ideal candidate for DevOps. 

Delory gave the example of an employee phone directory as an application where applying DevOps wouldn’t make sense. 

“Your employee phone directory is probably a Ruby on Rails app that was written in 2009, and nobody’s touched it since,” said Delory. “Bringing in these kinds of DevOps transformation, cloud transformation, you could do that, but it’s not really necessary, and I don’t think you’re going to see ROI on that in any reasonable time horizon.”

The third factor that Delory thinks keeps people stuck in their DevOps transformation is politics and team structure. 

For example, some developers might not be too thrilled about organizing a central operations team, while others would be happy about the change. Developers who don’t want to manage their own infrastructure would be ready and willing to hand that over to someone else, while those who really like getting their hands dirty and being involved in that aspect would probably be the ones not too happy about having to adopt this new team structure.

“In all of these conversations around redesigned team boundaries and roles, getting it right is critical. And if you don’t get it right, then that can definitely be a barrier to adoption,” said Delory. 

Cuddy agrees with this sentiment, and believes that the single biggest piece of DevOps is the people, not the tools or processes. 

“If you are not maintaining any kind of an organizational culture that supports DevOps, that enables people, that builds trust, that allows for flexibility, that allows room to fail fast and grow and learn, you’re gonna get stuck eventually,” said Cuddy.

He says that a bad culture will always beat out good processes, every single time. This is why it’s so important to focus on getting the team culture to where it needs to be. 

Cuddy believes that in order to successfully change culture, you need leadership buy-in so that change can be enacted not only bottom-up, but top-down. 

This idea has given rise to the need for value stream management. According to Wagner, when companies have been investing significantly in something like DevOps for years, they want to be able to see the relationship between their investments and business outcomes.

“Leaders may not be seeing a return on investment, and perhaps there’s not as much money coming back to the development teams to improve,” said Wagner. “So it’s really finding those bottlenecks using things like value stream mapping, value stream management, prioritizing, working closer with the leadership and the stakeholders to make sure that we are linking, and that the things we do in the product teams are directly contributing to the business.” 

A guide to DevOps tools
https://sdtimes.com/devops/a-guide-to-devops-tools/ | Thu, 02 Jun 2022

The following is a listing of DevOps tool providers, along with a brief description of their offerings. 


HCL Software is a division of HCL Technologies (HCL) that operates its primary software business. We develop, market, sell, and support over 30 product families in the areas of Customer Experience, Digital Solutions, Secure DevOps, Security and Automation. Our mission is to drive ultimate customer success with their IT investments through relentless innovation of our software products.

RELATED CONTENT: 
How HCL Software helps companies evolve their DevOps practices
Stuck in the [DevOps] Middle With You

Atlassian offers tools like Jira and Trello, which can be used to make project management easier and enable cross-functional collaboration. Its solutions help companies stay on track as they work to deliver products. In addition to its offerings, it also believes that “great teamwork requires more than just great tools.” To that end, it promotes practices like retrospectives, the DACI decision-making framework, defining clear roles and responsibilities, and developing objectives and key results (OKRs).

CircleCI is a continuous integration and delivery platform that enables teams to automate their delivery processes. It provides change validation at every step of the process so that developers can have confidence in their code. It also offers flexibility through the ability to code in any language and to utilize thousands of pre-built integrations. 

CloudBees: The CloudBees Suite builds on continuous integration and continuous delivery automation, adding a layer of governance, visibility and insights necessary to achieve optimum efficiency and control new risks. This automated software delivery system is becoming the most mission-critical business system in the modern enterprise.

Codefresh is a GitOps-based continuous delivery platform that is built with Argo. It offers benefits like progressive delivery, traceability, integrations with CI tools like Jenkins and GitHub Actions, and a universal dashboard for viewing software deliveries. 

Digital.ai: The company’s Deploy product helps organizations automate and standardize complex, enterprise-scale application deployments to any environment — from mainframes and middleware to containers and the cloud. Speed up deployments with increased reliability. Enable self-service deployment while maintaining governance and control.

GitLab: GitLab allows Product, Development, QA, Security, and Operations teams to work concurrently on the same project. GitLab’s built-in continuous integration and continuous deployment offerings enable developers to easily monitor the progress of tests and build pipelines, then deploy with confidence across multiple environments — with minimal human interaction. 

IBM: UrbanCode Deploy accelerates delivery of software change to any platform – from containers in the cloud to mainframes in the data center. Manage build configurations and build infrastructures at scale. Release interdependent applications with pipelines of pipelines, plan release events, and orchestrate simultaneous deployments of multiple applications. Improve DevOps performance with value stream analytics. Use it as a stand-alone solution or integrate it with other CI/CD tools such as Jenkins. 

JFrog’s DevOps platform offers end-to-end management of software development. DevOps teams can control the flow of their binaries from build to production. Its DevOps portfolio includes tools like JFrog Artifactory for artifact management, JFrog XRay for security and compliance scanning, JFrog Distribution for releasing software, and more. 

Micro Focus ALM Octane is an enterprise DevOps Agile management solution designed to ensure high-quality app delivery. It includes Agile tools for team collaboration, the ability to scale Agile to the enterprise, and DevOps management.

Microsoft: Microsoft’s Azure DevOps Services solution is a suite of DevOps tools designed to help teams collaborate to deliver high-quality solutions faster. The solution features Azure Pipelines for CI/CD initiatives; Azure Boards for planning and tracking; Azure Artifacts for creating, hosting and sharing packages; Azure Repos for collaboration; and Azure Test Plans for testing and shipping.

Octopus Deploy: Octopus Deploy is an automated release management tool for modern developers and DevOps teams. Features include the ability to promote releases between environments, repeatable and reliable deployments, the ability to simplify the most complicated application deployments, an intuitive and easy-to-use dashboard, and first-class platform support.

Opsera provides continuous orchestration of development pipelines in order to enable companies to deliver software faster, safer, and smarter. Its offerings include automated toolchains, no-code pipelines, and end-to-end visibility. 

Planview’s Enterprise Agile Planning solution enables organizations to adopt and embrace Lean-Agile practices, scale Agile beyond teams, practice Agile Program Management, and better connect strategy to Agile team delivery while continuously improving the flow of work and helping them work smarter and deliver faster. With Planview, choose how you want to scale and when. We’ll help you transform and scale Agile on your terms and timeline.

ServiceNow enables companies to do DevOps at scale. Developers are able to keep using the tools they love while still connecting with ServiceNow’s platform. The company enables automation of administrative tasks, while bringing together both ops and dev teams. 

The post A guide to DevOps tools appeared first on SD Times.

The Dynamic Workload Console is the one-stop automation platform for users across the business https://sdtimes.com/softwaredev/the-dynamic-workload-console-is-the-one-stop-automation-platform-for-users-across-the-business/ Wed, 01 Jun 2022 15:57:31 +0000

The Dynamic Workload Console (DWC) has become a core platform for workload automation, providing visibility into everything all in one place.

“The designing of a job stream is a key operation for schedulers and application developers to interconnect business applications and achieve governance and control,” Zaccone said. “Our idea with the new Workload Designer is to empower what we had and push it to an advanced level to provide everything that is needed to our customers.” 

The general goal is also to have a console in which someone who is new to workload automation can manage things like governance and processes across their full lifecycle, from scheduling to monitoring. For the first time, users can benefit from having everything in one place in a one-stop automation platform. 

The new Workload Designer version provides a streamlined single point of control for the design of WA objects, based on security authority for each object, including Workstations, Event Rules, and an integrated Workload Application Template import definition.

One addition in the new release is object listing based on hierarchical views displayed in folders. The release also enables editing multiple objects at the same time. 

“The idea is to empower what we had and to push it to an advanced level to give our customers everything they need,” Zaccone said. “We also wanted to give customers a place where someone can easily get started and keep everything under control, including all of the processes from scheduling to monitoring.”

Since customers are now using the Workload Designer and the Dynamic Workload Console more often, the expanded contextual help allows them to utilize more of the features that the platform offers. 

The new Workload Designer delivers a more responsive, fast, and fluid user interface. The Workload Designer landing page immediately gives a full picture of folders and workload definition status, and the explore area is customizable by the user: reorder the counters or switch to a compact view, all based on your preferences. The Workload Designer tables also allow you to organize which columns and data to view.

“Whether it’s the business user, the admin, the operator, all of them can leverage the Dynamic Workload Console for different reasons,” Zaccone said. 

The business user may want to use the Dynamic Workload Console to define and monitor custom dashboards and KPIs. Meanwhile, the admin is the one defining the high-level design and the big picture of what is happening, so they can define the job stream and the workflow. The operator would most often use the Dynamic Workload Console to define what is available, starting from templates. 

Overall, the new Workload Designer focuses on providing customizability. 

“The ability to define custom dashboards is one of the things most appreciated by customers,” Zaccone said. “Workload Automation can do many things and interconnect IT and business processes. But at the end of the day, you want to know how things are going. So, it’s important for the tool to show that everything is going fine, or, when something is wrong, to enable the operator to quickly do a deep dive into the problem.” 

You can access the DWC from any computer in your environment using a web browser, over either the secure HTTPS or HTTP protocol. The DWC also provides graphical tools to manage the workload, including a graphical view for modeling, a plan view for monitoring, a job stream view for monitoring, troubleshooting, and impact analysis, and a preproduction plan view for workload planning. From each view, users can take actions on objects, view their properties, and easily switch between the views. 

To start your 90-day free trial and get hands-on experience with a one-stop automation platform, click here.


About HCL Software
HCL Software, a division of HCL Technologies (HCL), develops, markets, sells, and supports over 30 product families in the areas of Customer Experience, Digital Solutions, DevSecOps, and Security and Automation. HCL Software is the cloud native solution factory for enterprise software and powers millions of apps at more than 20,000 organizations, including over half of the Fortune 1000 and Global 2000 companies. HCL Software’s mission is to drive ultimate customer success with their IT investments through relentless product innovation.


Content provided by SD Times and HCL Software

The post The Dynamic Workload Console is the one-stop automation platform for users across the business appeared first on SD Times.

Optimize data transfer and integrate file transfer in your automation workflows https://sdtimes.com/data/optimize-data-transfer-and-integrate-file-transfer-in-your-automation-workflows/ Fri, 01 Apr 2022 13:00:01 +0000

Workload automation is a critical piece of digital transformation. It can enable practitioners to schedule and execute business process workflows, optimize data transfer and processing, and cut down on errors and delays in the execution of the business processes themselves. 
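
As a rough sketch of the underlying idea, and not of HCL’s implementation, the following Python example schedules a hypothetical job with the standard library and retries it on failure — the kind of error-and-delay handling that workload automation tools generalize across an enterprise.

import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def transfer_batch():
    # Hypothetical stand-in for a real business process step,
    # such as a nightly data transfer.
    print("Running batch transfer at", time.strftime("%H:%M:%S"))

def run_with_retry(job, retries=3, delay_seconds=5):
    # Retry a failed job a few times before giving up. Workload automation
    # tools apply this kind of policy, plus alerting, to every step.
    for attempt in range(1, retries + 1):
        try:
            job()
            return
        except Exception as exc:
            print(f"Attempt {attempt} failed: {exc}")
            time.sleep(delay_seconds)
    print("Job failed after all retries; a real tool would raise an alert")

# Run the job ten seconds from now. Enterprise schedulers express this as
# calendars, dependencies, and job streams rather than raw delays.
scheduler.enter(10, 1, run_with_retry, argument=(transfer_batch,))
scheduler.run()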

Businesses currently have three main approaches to modernization and digital transformation.

One is that they are in some cases still investing in legacy systems that could be distributed. The second approach is to readjust and re-architect applications with a lift-and-shift type of approach so they run on the cloud. Lastly, they are looking to rebuild and reinvent their applications to become cloud-native. 

All of these different strategies have a common factor: the business processes are interconnected across platforms and heterogeneous systems, which brings challenges and risks. 

“Application workloads are no longer sitting in predefined data centers and are now spread across multiple clouds, bringing a challenge in that they need to be managed and mitigated,” said Francesca Curzi, HCL Software global sales leader for workload automation, mainframe, and data platform. 

Customers need to embrace a systematic approach, avoiding islands of automation where each context is managed by a different tool. Organizations also need to manage their data flows as more data becomes available; here, file transfer capability is becoming more and more important for keeping everything truly interconnected, Curzi added. 

The new HCL Workload Automation v.10, launched on March 4th, offers unique technology to enable this kind of digital transformation and to tackle these challenges. It can execute any type of job anywhere: on premises or on the cloud of one’s choice. The tool leverages historical workload execution data with AI to expose observable data and provide an enhanced operational experience.

“It removes these islands of automation across different applications and brings unique capabilities with advanced models into the market,” said Marco Cardelli, HWA lead product manager.

HCL Workload Automation can optimize data transfers and processing by leveraging a single point of control and integration for managed file transfer (MFT), robotic process automation (RPA), and big data applications. 
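
For a sense of what a managed file transfer step automates, here is a generic sketch that pushes a file over SFTP using the widely used paramiko SSH library for Python. The host, credentials, and paths are hypothetical, and the sketch illustrates the general technique rather than HCL’s MFT implementation.

import paramiko

# Hypothetical connection details; a production MFT setup would rely on
# key-based authentication, checksums, and retry policies instead of
# hard-coded credentials.
HOST = "sftp.example.com"
USER = "batchuser"
PASSWORD = "change-me"

client = paramiko.SSHClient()
# Auto-accepting unknown host keys is convenient for a demo but unsafe in
# production, where host keys should be verified.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

sftp = client.open_sftp()
try:
    # Push a local batch file to the remote system.
    sftp.put("daily_orders.csv", "/incoming/daily_orders.csv")
finally:
    sftp.close()
    client.close()

A tool like Workload Automation wraps transfers of this kind in scheduling, dependency handling, and monitoring so they do not become one-off scripts.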

Schedulers and operators will benefit from the tool’s flexibility, and executives can feel safer with a robust technology from a long-time market leader that takes care of business continuity.

All of the plugins that come with the new version provide a way to orchestrate different applications without needing to write a script to manage them. Users of Workload Automation v.10 get a plugin panel in the web user interface where they define exactly what kind of job they want and simply provide the parameters to orchestrate it. 

The solution offers ERP integrations such as SAP, Oracle E-Business, and PeopleSoft, and big data integrations like Informatica, Hadoop, Cognos, DataStage, and more. It also offers multiple ways to manage message queues, web services, RESTful APIs, and more. 

Last, but also very important, HCL is automating some RPA tools, offering the possibility to orchestrate the execution of bots, in particular on Automation Anywhere and Blue Prism, with IBM RPA support planned for this year. 

Users will also benefit from AI and ML capabilities. Version 10 offers anomaly detection and identification of patterns in workload execution.
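
As an illustration of the general technique, and not necessarily the algorithm the product uses, the sketch below flags anomalous job runtimes with scikit-learn’s IsolationForest, using hypothetical runtime data.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical history of one job's runtimes, in minutes; a real system
# would pull these from workload execution logs.
runtimes = np.array([12, 11, 13, 12, 14, 12, 11, 13, 45, 12]).reshape(-1, 1)

# contamination is the expected share of anomalies in the data.
model = IsolationForest(contamination=0.1, random_state=0)
labels = model.fit_predict(runtimes)  # -1 flags an anomaly, 1 means normal

for runtime, label in zip(runtimes.ravel(), labels):
    if label == -1:
        print(f"Anomalous run detected: {runtime} minutes")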

“In the future, we also want to take care of noise reduction related to alerts and messages of the product to help our operators fix job issues, providing root cause analysis and suggesting self-healing based on historical data, and to also improve the usability of the Dynamic Workload Console by allowing AI to help customers define objects, find features, and so on,” Curzi said. 

There is also a new component called the AI Data Advisory, available for containers. It applies big data, machine learning, and analytics technologies to Workload Automation data to provide anomaly detection. From there, a dedicated UI provides historical data analysis for jobs and workstations, empowering operators.

With digital transformation, organizations can take advantage of the most advanced workload scheduling, managed file transfer, and real-time monitoring capabilities for continuous automation. In addition, organizations can keep control of their automation processes from a single point of access and monitoring. For more information, click here.

To start your 90-day free trial and get hands-on experience with a one-stop automation platform, click here.

Content provided by SD Times and HCL Workload Automation

The post Optimize data transfer and integrate file transfer in your automation workflows appeared first on SD Times.
