How AI Agents Are Starting To Automate the Enterprise

So far, 2025 has been the year of AI agents — where generative AI technology is used to automate actions. We’ve seen OpenAI’s Operator debut, demonstrating a prototype agent that can browse the web and do tasks for you. Now a new firm called Orby is bringing that same approach to the enterprise, with a type of AI model it calls a Large Action Model (LAM).
I spoke with Orby’s co-founder and CTO, Will Lu, about agents in the enterprise. Prior to Orby, Lu had been an engineering leader at Google Cloud AI.
So what is a LAM and how exactly is it different from an LLM? Lu explained that — unlike LLMs, which process text or images as input and generate text or images as output — LAMs are designed specifically for automation tasks in enterprise environments. He mentioned Salesforce and SAP as examples of IT software products its LAM has explored, in order to identify tasks that can be automated.
Lu used the word “traces” to describe the workflow data that its foundation LAM, called ActIO, has been collecting. He stated it has collected “over a million traces, and usually a trace can be 10 to 50 steps long.”
In a follow-up email, Lu expanded on the definition of “trace.”
He explained that Orby’s software actively explores enterprise software environments (e.g., Salesforce, ERP systems) to identify tasks that can be automated. The agent autonomously attempts these tasks, and the best-performing attempts (successful traces) are used to fine-tune the model.
Like most other large language models, ActIO was trained on open web data. However, Lu added that Orby can also fine-tune it using a customer’s proprietary data.
Orby’s solution has similarities to OpenAI’s Operator, which launched near the end of January. Operator, currently only available to Pro subscribers ($200 per month), was described by OpenAI as a “research preview of an agent that can use its own browser to perform tasks for you.” In a review, Kevin Roose of The New York Times called it “more an intriguing demo than a product I’d recommend using — and definitely not something most people need to spend $200 a month on.”
I asked Lu how Orby compares to OpenAI’s Operator.
One of the differences, he stated, is that Orby has a concept it calls “grounding.”
“Basically, grounding is [for] a specific action you want to do — say, for example, submit a report. So that’s the action, and then you want to find the elements that can get that thing done, and then trigger it. That’s called the grounding step.”
This concept comes from a project Orby did alongside Ohio State University, called UGround — described as “a universal visual grounding model for locating the element of an action by pixel coordinates on GUIs.” UGround was trained on 10M elements collected from screenshots.
Lu also noted that Orby has an AI agent software stack that it offers to enterprises.
“So basically […] we designed it so that people can demonstrate how a task is done. Based on that demonstration, we generate both the description and the code under the description to be run. Then […] developers can come in, look at the description and generated code, make modifications to suit their needs — and then run the agent based on the code that they defined.”
Lu added that non-technical employees can run simple tasks themselves, but for more complex “actions,” developers are typically involved.
“When it comes to really complex, real enterprise use cases, what we expect is technical people to make sure that it runs at scale. When, for example, a task is currently being done by 100 people, you want to make sure that the virtual machine is set up correctly, the agent is running in the same environment there, and can they get access to all the systems and all the credentials.”
AI agents, or agentic AI to use the trendy term, have rapidly become a priority for enterprise IT departments to consider. So I asked Lu what advice he’d give to CIOs and other enterprise IT leaders when considering if and when to use AI agents.
“I think the most critical thing is to find real business pain that people are looking for,” he replied. “And then, when it comes to business pain, we want to identify the steps that are very time-consuming for people.”
Those steps might be time-consuming for human employees, but “really, really easy for a computer to do,” he added.
One use case from Orby’s end clients is expense report auditing.
“Almost every enterprise has this process, and the process is kind of tedious,” Lu said. “You have to open a report, look at all the receipts, look at all the information being filled out, and then check whether the information matches […] or not. And also check those reports against the policies that are defined by the enterprise — like, for example, there’s no alcohol [allowed].”
My instinctual follow-up question as a tech journalist was to ask what APIs Orby’s software connects to — SAP, for example. But Lu confirmed that it’s all done via an AI agent; no APIs are needed.
“That’s the beauty of our solutions. We [Orby’s software] mainly operate those applications as if we were humans operating those systems. So there’s no actual integration needed. So as long as our agents get access to the system, as long as we have the credentials, we can log into the system and then conduct the work.”
So, how about security concerns? Lu confirmed that security is “always a top ask from almost all the enterprises” and that they work with each customer on that.
Lastly, it’s worth noting that even though Orby’s goal is to help enterprises automate workflows, for now there is always a human in the loop.
“There’s a whole agentic workflow design [that] is a core of our whole offering, because the models today still don’t work 100%, and it will still be that way for a very long time,” said Lu. “So we have this human-in-the-loop process built in by design.”
Observability Can Get Expensive. Here’s How to Trim Costs

Telemetry data feeds are beneficial for developers and operations teams. However, observability feeds come at a cost: some large end users spend tens of millions of dollars annually on an observability solution. Depending on the observability provider, these costs may include security coverage.
CFOs and other financial decision-makers are increasingly scrutinizing this pay-as-you-go model as they come under pressure to reduce spending. As a result, DevOps teams are being asked to be more selective about the telemetry data they pay for, focusing on what actually supports observability and service analysis.
As customers and organizations demand more advanced features, they certainly won’t want to pay more. Instead, they will look for ways observability providers can help them reduce costs through advanced tools or practices.
Telemetry pipelines have already become a critical component of larger organizations’ observability strategies, particularly where there is a need to aggregate and process data from multiple sources, Gartner analysts Mrudula Bangera, Martin Caren, Matt Crossley and Gregg Siegfried wrote in a research note.
“Telemetry pipelines enable the efficient collection, processing and delivery of such telemetry data, including logs, metrics and traces,” the analysts wrote. “Organizations should consider the need, cost, value and [return on investment] of telemetry pipelines as they consider their key functional areas.
“It is also worth considering the potential ‘lock-in’ risk of purchasing a telemetry pipeline from the same vendor as their main observability platform.”
Organizations face both good and bad choices, and these decisions can be fraught with difficulty. Making the choices more challenging is a proliferation of observability options, vendors and cost considerations.
Many of these options — such as selecting tools for observability and telemetry data collection, especially given the high cost of storage — are seen as ways to improve operations. However, cost always remains a key consideration.
Observability vendors offer intelligent solutions that enhance insights, analytics and a host of other benefits. They offer tools and platforms that can simultaneously reduce costs, for example, by filtering out unnecessary telemetry data that is not useful for observability.
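One hedged sketch of what that filtering can look like in practice uses the open source OpenTelemetry Collector. The components shown below are real Collector components; the endpoint, sampling rate and severity threshold are illustrative assumptions, not a vendor recommendation:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  # Drop DEBUG- and INFO-level logs that rarely help during incident response.
  filter/low-value-logs:
    error_mode: ignore
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_WARN'
  # Ship a 10% sample of traces instead of every span.
  probabilistic_sampler:
    sampling_percentage: 10

exporters:
  otlphttp:
    endpoint: https://observability-backend.example.com:4318

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/low-value-logs]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [otlphttp]

Because the pipeline sits in front of the paid backend, the filtering happens before any per-gigabyte ingestion charges apply.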
However, there is always risk in switching vendors, even when those vendors promise to reduce costs for these organizations.
As the Gartner analysts write, there are dozens of vendors in the observability market, and organizations frequently struggle to differentiate between them when choosing observability platforms to implement. Increasingly, core functions are commoditized with vendors opting to differentiate with higher-level functionality, such as generative AI (GenAI) assistance and cost optimization.
“Be cautious of focusing on functional areas of the specialized and differentiated layers that the organization is unlikely to adopt during the first year,” the Gartner team wrote. “The high cost of observability solutions makes time-to-value critical, and unsubscribing from unused capabilities may be costly or impossible.”
Historically, the observability industry has had a “store it all” mentality, Jen Villa, director of product at Grafana Labs, told The New Stack.
“Whether it’s metrics, logs, traces, or profiles – especially at enterprise-level companies – daily data collection can easily surpass many millions of metrics series and petabytes of logs,” Villa said.
“At its core, the ‘store it all’ approach is meant to ensure that when something goes wrong, teams have access to everything so they can pinpoint the exact location of the failure in their infrastructure,” she said. “However, this has become increasingly infeasible as infrastructure becomes more complex and ephemeral; there is now just too much to collect without massive expense.”
Even if money is not an issue, collecting such vast quantities of data creates “needle in a haystack” problems during incident resolution, Villa stated. “Engineers have so much to sift through when they’re trying to resolve a problem that they don’t know where to start — they find themselves drowning in data, waiting on long-running queries that have to parse oceans of data.”.
So the real question in response to rising observability costs is: “Do you really need all that data?” And the answer, Villa said, is that you do not.
“You can store less of it or more compressed representations of it, and still get the same outcomes. You don’t need to sacrifice costs for capabilities.”.
Instead, Villa said, a proper solution should analyze and classify signals based on utility — through alerts, dashboards or queries — to automatically optimize low-value data through aggregation, saving customers sometimes up to 80% on costs.
A proper observability platform can continually analyze telemetry data in order to have the most up-to-date picture of what is useful rather than a one-time, manual audit “that’s essentially stale as soon as it gets done,” Villa noted.
“It’s less about organizations wanting to pay less for observability tools; they’re thinking more long-term about their investment and choosing platforms that will save them down the line,” she said. “The more they save on data collection, the more they can reinvest into other areas of observability, including new signals like profiling that they might not have explored yet.”
Moving from a “store it all” to a “store intelligently” strategy is not only the future of cost optimization, Villa said, but can also help make the haystack of data smaller — and thus make it easier to find the potentially harmful needles that lie within.
Organizations’ needs and requirements vary significantly, of course. A database storage business will have different observability needs than an online retail grocery store. There’s no one-size-fits-all approach, J Stephen Kowski, field CTO at SlashNext Email Security+, told The New Stack.
“It isn’t so binary; this will vary situationally from firm to firm,” Kowski said. “The ‘collect everything’ mindset from a decade ago has evolved, as smart organizations now focus on precision: collecting only the most meaningful data and using advanced AI to extract maximum value.
“Modern tools have become expensive because they’re still using a firehose to fill a water bottle, but the real opportunity lies in tools that can detect threats and issues with less data while maintaining effectiveness. The future winners in this space will be those who help clients optimize costs by focusing on high-signal data collection and intelligent analysis rather than just gathering more data.”
Transitioning Top-Layer Entries And The Display Property In CSS

We are getting spoiled with so many new capabilities involving animations with CSS, from scroll-driven animations to view transitions, and plenty of things in between. But it’s not always the big capabilities that make our everyday lives easier; sometimes, it’s those ease-of-life capabilities that truly enhance our projects. In this article, Brecht De Ruyte puts two of them on display: @starting-style and transition-behavior — two properties that are absolutely welcome additions to your everyday work with CSS animations.
Animating from and to display: none; was something we could only achieve with JavaScript by toggling classes, or with other hacks. The reason why we couldn’t do this in CSS is explained in the new CSS Transitions Level 2 specification:
“In Level 1 of this specification, transitions can only start during a style change event for elements that have a defined before-change style established by the previous style change event. That means a transition could not be started on an element that was not being rendered for the previous style change event.”
In simple terms, this means that we couldn’t start a transition on an element that is hidden or that has just been created.
What Does transition-behavior: allow-discrete Do?
allow-discrete is a bit of a strange name for a CSS property value, right? We are going on about transitioning display: none, so why isn’t this named transition-behavior: allow-display instead? The reason is that this does a bit more than handle the CSS display property, as there are other “discrete” properties in CSS. A simple rule of thumb is that discrete properties do not transition but usually flip right away between two states. Other examples of discrete properties are visibility and mix-blend-mode. I’ll include an example of these at the end of this article.
To summarise, setting the transition-behavior property to allow-discrete allows us to tell the browser it can swap the values of a discrete property (e.g., display, visibility, and mix-blend-mode) at the 50% mark instead of the 0% mark of a transition.
The @starting-style rule defines the styles of an element right before it is rendered to the page. This is highly needed in combination with transition-behavior and this is why:
When an item is added to the DOM or is initially set to display: none, it needs some sort of “starting style” from which it needs to transition. To take the example further, popovers and dialog elements are added to a top layer, which is a layer outside of your document flow; you can kind of look at it as a sibling of the html element in your page’s structure. Now, when opening this dialog or popover, they get created inside that top layer, so they don’t have any styles to start transitioning from, which is why we set @starting-style. Don’t worry if all of this sounds a bit confusing; the demos below might make it clearer. The critical thing to know is that we can give the browser something to start the animation with since it otherwise has nothing to animate from.
At the moment of writing, transition-behavior is available in Chrome, Edge, Safari, and Firefox. It’s the same for @starting-style, but Firefox currently does not support animating from display: none. But remember that everything in this article can be perfectly used as a progressive enhancement.
Now that we have the theory of all this behind us, let’s get practical. I’ll be covering three use cases in this article:
- Animating from and to display: none in the DOM.
- Animating dialogs and popovers entering and exiting the top layer.
- More “discrete properties” we can handle.
Animating From And To display: none In The DOM.
For the first example, let’s take a look at @starting-style alone. I created this demo purely to explain the magic. Imagine you want two buttons on a page to add or remove list items inside of an unordered list.
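The demo markup didn’t survive the trip into this article, but a minimal sketch of it (the class names are assumptions that match the CSS and JavaScript below) could be:

<button type="button" class="add">Add item</button>
<button type="button" class="remove">Remove item</button>

<ul>
  <li>Item 1</li>
  <li>Item 2</li>
</ul>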
Next, we add actions that add or remove those list items. This can be any method of your choosing, but for demo purposes, I quickly wrote a bit of JavaScript for it:
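That snippet isn’t included here either; a rough equivalent, assuming the markup above and the .removing class styled below, might be:

const list = document.querySelector("ul");

document.querySelector(".add").addEventListener("click", () => {
  const item = document.createElement("li");
  item.textContent = `Item ${list.children.length + 1}`;
  list.append(item);
});

document.querySelector(".remove").addEventListener("click", () => {
  const item = list.lastElementChild;
  if (!item) return;
  // Give the exit transition time to play before removing the node.
  item.classList.add("removing");
  item.addEventListener("transitionend", () => item.remove(), { once: true });
});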
With this in place, we can write some CSS for our items to animate the removing part:
ul {
  li {
    /* The demo’s exact timing wasn’t captured; 0.3s is an assumed duration,
       used consistently in the snippets that follow. */
    transition: opacity 0.3s, transform 0.3s;

    &.removing {
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}
This is great! Our .removing animation is already looking perfect, but what we were looking for here was a way to animate the entry of items coming inside of our DOM. For this, we will need to define those starting styles, as well as the final state of our list items.
First, let’s upgrade the CSS to have the final state inside of that list item:
ul {
  li {
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.3s, transform 0.3s;

    &.removing {
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}
Not much has changed, but now it’s up to us to let the browser know what the starting styles should be. We could set this the same way we did the .removing styles like so:
ul {
  li {
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.3s, transform 0.3s;

    @starting-style {
      opacity: 0;
      transform: translate(0, 50%);
    }

    &.removing {
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}
Now we’ve let the browser know that the @starting-style should include zero opacity and be slightly nudged to the bottom using a transform. The final result is that new items fade in while sliding up into place.
But we don’t need to stop there! We could use different animations for entering and exiting. We could, for example, change our starting style to the following:
@starting-style {
  opacity: 0;
  transform: translate(0, -50%);
}
Doing this, the items will enter from the top and exit to the bottom. See the full example in this CodePen.
When To Use transition-behavior: allow-discrete.
In the previous example, we added and removed items from our DOM. In the next demo, we will show and hide items using the CSS display property. The basic setup is pretty much the same, except we will add eight list items to our DOM with the .hidden class attached to them:
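The markup wasn’t captured here; it is essentially the same list, now with every item hidden from the start (a sketch):

<ul>
  <li class="hidden">Item 1</li>
  <li class="hidden">Item 2</li>
  <!-- …and so on, up to Item 8 -->
</ul>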
Let’s put together everything we learned so far, add a @starting-style to our items, and do the basic setup in CSS:
ul {
  li {
    display: block;
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.3s, transform 0.3s;

    @starting-style {
      opacity: 0;
      transform: translate(0, -50%);
    }

    &.hidden {
      display: none;
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}
This time, we have added the .hidden class, set it to display: none, and added the same opacity and transform declarations as we previously did with the .removing class in the last example. As you might expect, we get a nice fade-in for our items, but removing them is still very abrupt as we set our items directly to display: none.
This is where the transition-behavior property comes into play. To break it down a bit more, let’s remove the transition property shorthand of our previous CSS and open it up a bit:
ul {
  li {
    display: block;
    opacity: 1;
    transform: translate(0, 0);
    transition-property: opacity, transform;
    transition-duration: 0.3s;
  }
}
All that is left to do is transition the display property and set the transition-behavior property to allow-discrete:
ul {
  li {
    display: block;
    opacity: 1;
    transform: translate(0, 0);
    transition-property: opacity, transform, display;
    transition-duration: 0.3s;
    transition-behavior: allow-discrete;

    /* etc. */
  }
}
We are now animating the element from display: none, and the result is exactly as we wanted it.
We can use the transition shorthand property to make our code a little less verbose:
transition: opacity 0.3s, transform 0.3s, display 0.3s allow-discrete;
You can add allow-discrete in there. But if you do, take note that if you declare a shorthand transition after transition-behavior, it will be overruled. So, instead of this:
transition-behavior: allow-discrete;
transition: opacity 0.3s, transform 0.3s, display 0.3s;
…we want to declare transition-behavior after the transition shorthand:
transition: opacity 0.3s, transform 0.3s, display 0.3s;
transition-behavior: allow-discrete;

Otherwise, the transition shorthand property overrides transition-behavior.
Animating Dialogs And Popovers Entering And Exiting The Top Layer.
Let’s add a few use cases with dialogs and popovers. Dialogs and popovers are good examples because they get added to the top layer when opening them.
We’ve already likened the “top layer” to a sibling of the html element, but you might also think of it as a special layer that sits above everything else on a web page. It’s like a transparent sheet that you can place over a drawing. Anything you draw on that sheet will be visible on top of the original drawing.
The original drawing, in this example, is the DOM. This means that the top layer is out of the document flow, which provides us with a few benefits. For example, as I stated before, dialogs and popovers are added to this top layer, and that makes perfect sense because they should always be on top of everything else. No more z-index: 9999!
- z-index is irrelevant: Elements on the top layer are always on top, regardless of their z-index value.
- DOM hierarchy doesn’t matter: An element’s position in the DOM doesn’t affect its stacking order on the top layer.
- Backdrops: We get access to a new ::backdrop pseudo-element that lets us style the area between the top layer and the DOM beneath it.
Hopefully, you are starting to understand the importance of the top layer and how we can transition elements in and out of it, as we would with popovers and dialogs.
Transitioning The Dialog Element In The Top Layer.
The following HTML contains a button that opens a dialog element, and that dialog contains another button that closes it. So, we have one button that opens the dialog and one that closes it.
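The markup itself was stripped from this article; a minimal reconstruction (the ids are assumptions used by the script below) looks like this:

<button type="button" id="open-dialog">Open dialog</button>

<dialog id="dialog">
  <p>I am a dialog in the top layer!</p>
  <button type="button" id="close-dialog">Close dialog</button>
</dialog>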
A lot is happening in HTML with invoker commands that will make the following step a bit easier, but for now, let’s add a bit of JavaScript to make this modal actually work:
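A sketch of that wiring, using the standard showModal() and close() methods on the dialog element:

const dialog = document.querySelector("#dialog");

// showModal() places the dialog in the top layer and renders its ::backdrop.
document.querySelector("#open-dialog").addEventListener("click", () => {
  dialog.showModal();
});

document.querySelector("#close-dialog").addEventListener("click", () => {
  dialog.close();
});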
I’m using the following styles as a starting point. Notice how I’m styling the ::backdrop as an added bonus!
dialog {
  padding: 30px;
  width: 100%;
  max-width: 600px;
  background: #fff;
  border-radius: 8px;
  border: 0;
  /* The shadow alpha and gradient chroma values approximate the original demo. */
  box-shadow:
    rgba(0, 0, 0, 0.3) 0px 19px 38px,
    rgba(0, 0, 0, 0.22) 0px 15px 12px;

  &::backdrop {
    background-image: linear-gradient(
      45deg in oklab,
      oklch(80% 0.15 222) 0%,
      oklch(35% 0.15 313) 100%
    );
  }
}
This results in a pretty hard transition for the entry, meaning it’s not very smooth.
Let’s add transitions to this dialog element and the backdrop. I’m going a bit faster this time because by now, you likely see the pattern and know what’s happening:
dialog {
  opacity: 0;
  translate: 0 30%;
  transition-property: opacity, translate, display;
  transition-duration: 0.3s;
  transition-behavior: allow-discrete;

  &[open] {
    opacity: 1;
    translate: 0 0;

    @starting-style {
      opacity: 0;
      translate: 0 -30%;
    }
  }
}
When a dialog is open, the browser slaps an open attribute on it:
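<dialog open>
  <!-- the open attribute is present only while the dialog is shown -->
</dialog>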
And that’s something else we can target with CSS, like dialog[open]. So, in this case, we need to set a @starting-style for when the dialog is in an open state.
Let’s add a transition for our backdrop while we’re at it:
dialog {
  /* etc. */

  &::backdrop {
    opacity: 0;
    transition-property: opacity;
    transition-duration: 1s;
  }

  &[open] {
    /* etc. */

    &::backdrop {
      /* The open-state opacity wasn’t captured; 0.8 is an assumed value. */
      opacity: 0.8;

      @starting-style {
        opacity: 0;
      }
    }
  }
}
Now you’re probably thinking: A-ha! But you should have added the display property and the transition-behavior: allow-discrete on the backdrop!
But no, that is not the case. Even if I changed my backdrop pseudo-element to the following CSS, the result would stay the same:
&::backdrop {
  opacity: 0;
  transition-property: opacity, display;
  transition-duration: 1s;
  transition-behavior: allow-discrete;
}
It turns out that we are working with a ::backdrop, and when working with a ::backdrop, we’re implicitly also working with the CSS overlay property, which specifies whether an element appearing in the top layer is currently rendered in the top layer.
And overlay just so happens to be another discrete property that we need to include in the transition-property declaration:
dialog {
  /* etc. */

  &::backdrop {
    transition-property: opacity, display, overlay;
    /* etc. */
  }
}
Unfortunately, this is currently only supported in Chromium browsers, but it can be perfectly used as a progressive enhancement.
And, yes, we need to add it to the dialog styles as well:
dialog {
  transition-property: opacity, translate, display, overlay;
  /* etc. */

  &::backdrop {
    transition-property: opacity, display, overlay;
    /* etc. */
  }
}
It’s pretty much the same thing for a popover instead of a dialog. I’m using the same technique, only working with popovers this time:
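The popover code wasn’t captured here; a minimal sketch of the same pattern, using the popover attribute, the :popover-open state, and an assumed id, might be:

<button type="button" popovertarget="tip">Toggle popover</button>

<div id="tip" popover>I live in the top layer, too!</div>

[popover] {
  opacity: 0;
  translate: 0 30%;
  transition-property: opacity, translate, display, overlay;
  transition-duration: 0.3s;
  transition-behavior: allow-discrete;

  &:popover-open {
    opacity: 1;
    translate: 0 0;

    @starting-style {
      opacity: 0;
      translate: 0 -30%;
    }
  }
}

No JavaScript is needed this time; the popovertarget attribute wires the button to the popover declaratively.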
There are a few other discrete properties besides the ones we covered here. If you remember the second demo, where we transitioned some items from and to display: none, the same can be achieved with the visibility property instead. This can be handy for those cases where you want items to preserve space for the element’s box, even though it is invisible.
So, here’s the same example, only using visibility instead of display.
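Since only the CodePen link remains, here is a sketch of that variant under the same assumptions as before; note that visibility joins the transition instead of display:

ul {
  li {
    visibility: visible;
    opacity: 1;
    transform: translate(0, 0);
    transition: opacity 0.3s, transform 0.3s, visibility 0.3s;
    transition-behavior: allow-discrete;

    @starting-style {
      opacity: 0;
      transform: translate(0, -50%);
    }

    &.hidden {
      /* Unlike display: none, the hidden item keeps its box’s space. */
      visibility: hidden;
      opacity: 0;
      transform: translate(0, 50%);
    }
  }
}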
See the Pen Transitioning the visibility property [forked] by utilitybend.
The CSS mix-blend-mode property is another one that is considered discrete. To be completely honest, I can’t find a good use case for a demo, but I went ahead and created a somewhat trite example where two mix-blend-modes switch right in the middle of the transition instead of right away.
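As a rough sketch of that idea (the hover trigger and the particular blend modes are my own assumptions):

img {
  mix-blend-mode: multiply;
  translate: 0 0;
  transition: mix-blend-mode 1s allow-discrete, translate 1s;

  &:hover {
    /* The blend mode flips at the 50% mark while the movement stays smooth. */
    mix-blend-mode: difference;
    translate: 0 -30px;
  }
}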
See the Pen Transitioning mix-blend-mode [forked] by utilitybend.
That’s an overview of how we can transition elements in and out of the top layer! In an ideal world, we could get away without needing a completely new property like transition-behavior just to transition otherwise “un-transitionable” properties, but here we are, and I’m glad we have it.
But we also got to learn about @starting-style and how it provides browsers with a set of styles that we can apply to the start of a transition for an element that’s in the top layer. Otherwise, the element has nothing to transition from at first render, and we’d have no way to transition them smoothly in and out of the top layer.
Market Impact Analysis
Market Growth Trend
2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
---|---|---|---|---|---|---|
7.5% | 9.0% | 9.4% | 10.5% | 11.0% | 11.4% | 11.5% |
Quarterly Growth Rate
Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024 |
---|---|---|---|
10.8% | 11.1% | 11.3% | 11.5% |
Market Segments and Growth Drivers
Segment | Market Share | Growth Rate |
---|---|---|
Enterprise Software | 38% | 10.8% |
Cloud Services | 31% | 17.5% |
Developer Tools | 14% | 9.3% |
Security Software | 12% | 13.2% |
Other Software | 5% | 7.5% |
Competitive Landscape Analysis
Company | Market Share |
---|---|
Microsoft | 22.6% |
Oracle | 14.8% |
SAP | 12.5% |
Salesforce | 9.7% |
Adobe | 8.3% |
Future Outlook and Predictions
The AI agent landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:
Year-by-Year Technology Evolution
Based on current trajectory and expert analyses, we can project the following development timeline:
Technology Maturity Curve
Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:
Innovation Trigger
- Generative AI for specialized domains
- Blockchain for supply chain verification
Peak of Inflated Expectations
- Digital twins for business processes
- Quantum-resistant cryptography
Trough of Disillusionment
- Consumer AR/VR applications
- General-purpose blockchain
Slope of Enlightenment
- AI-driven analytics
- Edge computing
Plateau of Productivity
- Cloud infrastructure
- Mobile applications
Expert Perspectives
Leading experts in the software development sector provide diverse perspectives on how the landscape will evolve over the coming years:
"Technology transformation will continue to accelerate, creating both challenges and opportunities."
— Industry Expert
"Organizations must balance innovation with practical implementation to achieve meaningful results."
— Technology Analyst
"The most successful adopters will focus on business outcomes rather than technology for its own sake."
— Research Director
Areas of Expert Consensus
- Acceleration of Innovation: The pace of technological evolution will continue to increase
- Practical Integration: Focus will shift from proof-of-concept to operational deployment
- Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
- Regulatory Influence: Regulatory frameworks will increasingly shape technology development
Short-Term Outlook (1-2 Years)
In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing software development challenges:
- Technology adoption accelerating across industries
- Digital transformation initiatives becoming mainstream
These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.
Mid-Term Outlook (3-5 Years)
As technologies mature and organizations adapt, more substantial transformations will emerge in how security is approached and implemented:
- Significant transformation of business processes through advanced technologies
- New digital business models emerging
This period will see significant changes in security architecture and operational models, with increasing automation and integration between previously siloed security functions. Organizations will shift from reactive to proactive security postures.
Long-Term Outlook (5+ Years)
Looking further ahead, more fundamental shifts will reshape how cybersecurity is conceptualized and implemented across digital ecosystems:
- Fundamental shifts in how technology integrates with business and society
- Emergence of new technology paradigms
These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach security as a fundamental business function rather than a technical discipline.
Key Risk Factors and Uncertainties
Several critical factors could significantly impact the trajectory of software development evolution.
Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.
Alternative Future Scenarios
The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:
Optimistic Scenario
Rapid adoption of advanced technologies with significant business impact
Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.
Probability: 25-30%
Base Case Scenario
Measured implementation with incremental improvements
Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.
Probability: 50-60%
Conservative Scenario
Technical and organizational barriers limiting effective adoption
Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.
Probability: 15-20%
Scenario Comparison Matrix
Factor | Optimistic | Base Case | Conservative |
---|---|---|---|
Implementation Timeline | Accelerated | Steady | Delayed |
Market Adoption | Widespread | Selective | Limited |
Technology Evolution | Rapid | Progressive | Incremental |
Regulatory Environment | Supportive | Balanced | Restrictive |
Business Impact | Transformative | Significant | Modest |
Transformational Impact
Technology becoming increasingly embedded in all aspects of business operations. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.
The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.
Implementation Challenges
Technical complexity and organizational readiness remain key challenges. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.
Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.
Key Innovations to Watch
Artificial intelligence, distributed systems, and automation technologies leading innovation. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.
Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.
Technical Glossary
Key technical terms and definitions to help understand the technologies discussed in this article.
Understanding the following technical concepts is essential for grasping the full implications of the security threats and defensive measures discussed in this article. These definitions provide context for both technical and non-technical readers.