Technology News from Around the World, Instantly on Oracnoos!


GitHub Copilot: The agent awakens


When we introduced GitHub Copilot back in 2021, we had a clear goal: to make developers’ lives easier with an AI pair programmer that helps them write more effective code. The name reflects our belief that artificial intelligence (AI) isn’t replacing the developer. Instead, it’s always on their side. And like any good first officer, Copilot can also fly by itself: for example, when providing pull request feedback, autofixing security vulnerabilities, or brainstorming on how to implement an issue.

Today, we are upgrading GitHub Copilot with the force of even more agentic AI, introducing agent mode and announcing the general availability of Copilot Edits, both in VS Code. We are adding Gemini 2.0 Flash to the model picker for all Copilot clients. And we are unveiling a first look at Copilot’s new autonomous agent, codenamed Project Padawan. From code completions, chat, and multi-file edits to workspace and agents, Copilot puts the human at the center of the creative work that is software development. AI helps with the things you don’t want to do, so you have more time for the things you do.

GitHub Copilot’s new agent mode is capable of iterating on its own code, recognizing errors, and fixing them automatically. It can suggest terminal commands and ask you to execute them. It also analyzes run-time errors with self-healing capabilities.

In agent mode, Copilot will iterate on not just its own output, but the result of that output. And it will iterate until it has completed all the subtasks required to fulfill your prompt. Instead of performing just the task you requested, Copilot can now infer additional tasks that were not specified but are necessary for the primary request to work. Better still, it can catch its own errors, freeing you from having to copy/paste from the terminal back into chat.

Here’s an example where GitHub Copilot builds a web app to track marathon training:

To get started, you’ll need to download VS Code Insiders and then enable the agent mode setting for GitHub Copilot Chat:

Then, when in the Copilot Edits panel, switch from Edit to Agent right next to the model picker:

Agent mode will change the way developers work in their editor; and as such, we will bring it to all IDEs that Copilot supports. We also know that today’s Insiders build isn’t perfect, and welcome your feedback as we improve both VS Code and the underlying agentic technology in the coming months.

Introduced at GitHub Universe in October last year, Copilot Edits combines the best of Chat and Inline Chat with a conversational flow and the ability to make inline changes across a set of files that you manage. The feedback you provided was instrumental in shipping this feature as GA in VS Code today. Thank you!

In Copilot Edits you specify a set of files to be edited, and then use natural language to ask GitHub Copilot for what you need. Copilot Edits makes inline changes in your workspace, across multiple files, using a UI designed for fast iteration. You stay in the flow of your code while reviewing the suggested changes, accepting what works, and iterating with follow-up asks.

Behind the scenes, Copilot Edits leverages a dual-model architecture to enhance editing efficiency and accuracy. First, a foundation language model considers the full context of the Edits session to generate initial edit suggestions. You can choose your preferred foundation model from OpenAI’s GPT-4o, o1, and o3-mini, Anthropic’s Claude 3.5 Sonnet, and now, Google’s Gemini 2.0 Flash. For the optimal experience, we developed a speculative decoding endpoint, optimized for fast application of changes in files. The proposed edits from the foundation model are sent to the speculative decoding endpoint, which then proposes those changes inline in the editor.
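To make the division of labor concrete, here is a toy character-level sketch of why speculative application is fast: most of an edited file is identical to the original, so a cheap "draft" step can copy long runs from the original text, falling back to slow, one-symbol-at-a-time generation only where the edit diverges. The function and its chunking are invented for illustration; the real endpoint operates on model tokens, not characters.

```python
def speculative_apply(original: str, target: str, chunk: int = 8):
    """Reproduce `target`, counting how many characters came from cheap drafting.

    At each position we speculate that the next `chunk` characters are
    unchanged from `original`; if the guess verifies against `target`, we
    accept the whole run at once, otherwise we fall back to emitting a
    single character (standing in for a slow generation step).
    """
    out, i, drafted = [], 0, 0
    while i < len(target):
        draft = original[i:i + chunk]          # guess: text is unchanged here
        if draft and target.startswith(draft, i):
            out.append(draft)                  # accept the whole drafted run
            drafted += len(draft)
            i += len(draft)
        else:
            out.append(target[i])              # "slow path": one symbol
            i += 1
    return "".join(out), drafted

# Most of the edited file is copied cheaply; only the changed span is slow.
new, drafted = speculative_apply("total = sum(xs)\nprint(total)\n",
                                 "total = sum(xs)\nprint(f'{total=}')\n")
assert new == "total = sum(xs)\nprint(f'{total=}')\n" and drafted > 0
```

The speedup grows with the fraction of the file the edit leaves untouched, which is exactly the common case for multi-file edit sessions.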

Using your voice is a natural experience while using Copilot Edits. Just talking to Copilot makes the back and forth smooth and conversational. It almost feels like interacting with a colleague with area expertise, using the same kind of iterative flow that you would use in real-life pair programming.

Next on our roadmap is to improve the performance of the apply changes speculative decoding endpoint, support transitions into Copilot Edits from Copilot Chat by preserving context, suggest files to the working set, and allow you to undo suggested chunks. If you want to be among the first to get your hands on these improvements, make sure to use VS Code Insiders and the pre-release version of the GitHub Copilot Chat extension. To help improve the feature, please file issues in our repo.

Beyond the GA in VS Code, Copilot Edits is now in preview for Visual Studio 2022.

SWE agents, first introduced in this paper, are a type of AI-driven or automated system that assists (or acts on behalf of) software engineers. They can perform various development tasks, like generating and reviewing code, refactoring or optimizing the codebase, automating workflows like tests or pipelines, and providing guidance on architecture, error troubleshooting, and best practices. They are intended to offload some of the routine or specialized tasks of a software engineer, giving them more time to focus on higher value work. The performance of SWE agents is often measured against SWE-bench, a dataset of 2,294 Issue-Pull Request pairs from 12 popular Python repos on GitHub.
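As a sketch of what the benchmark looks like, here is the rough shape of a single SWE-bench instance. The field names follow the public dataset, but the values below are invented stand-ins, not real data:

```python
# Illustrative shape of one SWE-bench task instance (values are stand-ins).
instance = {
    "repo": "example/project",                # GitHub repo the issue comes from
    "instance_id": "example__project-1",      # unique task identifier
    "problem_statement": "Fix the off-by-one error in pagination.",
    "base_commit": "abc123",                  # commit the agent starts from
    "patch": "diff --git a/...",              # gold solution diff
    "test_patch": "diff --git a/tests/...",   # tests the solution must pass
}

# An agent is scored by applying its generated patch at `base_commit`
# and running the tests introduced by `test_patch`.
assert {"repo", "problem_statement", "patch"} <= instance.keys()
```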

We’re excited to share a first look at our autonomous SWE agent and how we envision these types of agents will fit into the GitHub user experience. When the product we are building under the codename Project Padawan ships later this year, it will allow you to directly assign issues to GitHub Copilot, using any of the GitHub clients, and have it produce fully tested pull requests. Once a task is finished, Copilot will assign human reviewers to the PR, and work to resolve feedback they add. In a sense, it will be like onboarding Copilot as a contributor to every repository on GitHub. ✨.

Behind the scenes, Copilot automatically spins up a secure cloud sandbox for every task it’s assigned. It then asynchronously clones the repository, sets up the environment, analyzes the codebase, edits the necessary files, and builds, tests, and lints the code. Additionally, Copilot takes into account any discussion within the issue or PR, and any custom instruction within the repository, so it understands the full intent of its task, as well as the guidelines and conventions of the project.

And just as we did with Copilot Extensions and the model picker in Copilot, we will also provide opportunities to integrate into this AI-native workflow and work closely with partners and end-customers in a tight feedback loop. We believe the end-state of Project Padawan will result in transforming how teams manage critical-yet-mundane tasks, such as fixing bugs or creating and maintaining automated tests. Because ultimately, it’s all about empowering developers by allowing them to focus on what matters, and letting copilots do the rest. And don’t worry. We will have patience, so the agent won’t turn to the dark side. 😉.

Awaken the agent with agent mode for GitHub Copilot in VS Code today.


Aerospike Debuts High-Performance Distributed ACID Transactions


Distributed databases with high write speeds have traditionally sacrificed consistency for availability. Version 8 of Aerospike’s high-performance multi-model database, unveiled Wednesday, helps dispel this notion by offering real-time distributed ACID transaction support at scale.

Already known for its high-performance online transactional processing (OLTP), Aerospike’s engine has been updated with key aspects that are ideal for ensuring consistency without sacrificing speed. In addition to providing distributed ACID transactions, version 8 guarantees strict serializability of those transactions.

There are also intuitive transaction APIs that allow for multiple operations within a transaction while simplifying the developer experience.

According to Aerospike founder and CTO Srini Srinivasan, the objective of the release is to “move, collectively, the field forward for having higher-performance databases which also support consistency. And, we try to minimize that compromise of performance and availability while you’re adding strong consistency.”

Aerospike’s ACID properties ensure transactions don’t interfere with each other while producing well-understood results. This point is critical to organizations in regulated spaces like finance, which process what Srinivasan estimated is up to hundreds of millions of transactions — each of which possibly contains multiple records — each second.

Such organizations are “using us for high performance, but they need to denormalize the data and put it in a single record,” Srinivasan revealed. “And, if they have a necessity to link multiple records together, while still keeping them separate for regulatory reasons, that requires you to implement proper transactions, which is what Aerospike 8 does.”

Most importantly, the updated engine shifts the onus of maintaining consistency from the application level to the database level, liberating developers from such vital concerns.

Prior to unveiling Aerospike Database 8, Aerospike provided transactional consistency for single-record operations. The distributed ACID characteristics of the new version supply consistency for more sophisticated transactions. “When you add the multirecord ACID distributed transaction support, you can change multiple records within the same transaction,” Srinivasan explained. Moreover, developers can realize the atomicity, consistency, isolation and durability (ACID) benefits for respective transactions across distributed systems spanning clouds, data centers and geographic locations.

Atomicity ensures a transaction either happens in full or not at all. Isolation means other transactions don’t access the records a transaction is currently accessing. Durability means the system won’t lose the data. Most importantly, these guarantees are provided for high-performance applications. Aerospike’s “algorithms to provide consistency are crafted to provide higher availability than many other algorithms,” Srinivasan said. “That’s actually unique.”

The strict serializability of Aerospike Database 8’s distributed ACID transactions is also a key feature for developers. This property, which Srinivasan said guarantees that transactions are executed in the database in the order in which they occur, means addressing these issues isn’t part of the app-building process. If an organization is transferring funds from one bank account to another and withdrawing money from the latter in a series of operations, then with strict serializability, “If a transaction finishes before another one starts, that is exactly how the database will execute it,” Srinivasan noted.

Strict serializability means each new transaction accessing the database is updated with changes to the database made by previous transactions. Additionally, Aerospike’s strict serializability for multirecord transactions doesn’t compromise the performance of the single-record transaction support the database previously had. In fact, it achieves the former without “slowing down the single records,” Srinivasan commented.

Aerospike Database 8’s new capabilities transfer the burden of ensuring consistency from the applications relying on the database to the database itself. This development is meaningful for two reasons. Firstly, it results in more dependable applications, reliable uptime and enhanced performance. Without database-level support, many of the consistency algorithms Aerospike provides would have to be implemented at the application level. “What that would mean is the applications would have to keep track of the state of every transaction they’re executing outside the database,” Srinivasan revealed. “And then, if the application server dies, then you lose state. So, it’s very, very hard to avoid data loss.”

Secondly, it’s difficult to identify bugs in distributed systems, which could create problems with the order in which transactions are executed. In addition to furnishing the aforementioned guarantees for consistency and the proper order of transactions, Aerospike supplies other tools to maintain consistency at the database level.

Srinivasan added that resources such as Jepsen’s testing capabilities enable “a third-party application developer to check, ‘Hey, this database, does it work? Is it a proof for the algorithm?’ It makes it easy for application programmers. They don’t have to do all the hard work. They just write the apps and they can depend on these guarantees, and they can get verification that these are indeed being met.”

Aerospike Database 8 also contains a transaction API that’s useful for enabling complex transactions for OLTP systems. With the API, once a transaction begins, it’s possible to do a number of operations in it before the transaction end phase is reached. “At that point, you’re not guaranteed that the transaction will commit, because until that time somebody else might have interfered,” Srinivasan noted. “But, that’s all done at the end-transaction phase. You basically put an envelope around all kinds of operations you’re doing on the database. That’s the API.”
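The “envelope” Srinivasan describes can be illustrated with a toy optimistic-concurrency sketch: operations are buffered inside a transaction, and the end-transaction phase aborts the commit if another writer touched the same records in the meantime. This is a generic illustration of the pattern in plain Python, not the Aerospike client API; all class and method names are invented.

```python
class Store:
    """A toy in-memory store with per-record version counters."""
    def __init__(self):
        self.data, self.versions = {}, {}

    def begin(self):
        return Txn(self)

class Txn:
    """Buffers operations; checks for interference only at commit time."""
    def __init__(self, store):
        self.store, self.writes, self.read_versions = store, {}, {}

    def read(self, key):
        # Remember the version we saw, so commit can detect interference.
        self.read_versions[key] = self.store.versions.get(key, 0)
        return self.store.data.get(key)

    def write(self, key, value):
        self.writes[key] = value              # buffered until commit

    def commit(self):
        for key, seen in self.read_versions.items():
            if self.store.versions.get(key, 0) != seen:
                return False                  # someone else interfered: abort
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.versions[key] = self.store.versions.get(key, 0) + 1
        return True

# A transfer touching two records commits atomically at the end phase.
s = Store()
s.data.update({"a": 100, "b": 0})
t = s.begin()
t.write("a", t.read("a") - 25)
t.write("b", t.read("b") + 25)
assert t.commit() and s.data == {"a": 75, "b": 25}
```

A second transaction that read the same records before this commit would fail its version check and abort, which is the behavior the quote describes: nothing is guaranteed until the end-transaction phase.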

Aerospike Database 8 also supports Spring to improve the developer experience of using that framework with the database. As Srinivasan put it, “Application developers can just program in Spring, and then, underneath the covers, we provide a library which translates the Spring API’s application call into underlying API calls at the database level. The Spring developer doesn’t need to know the APIs of the Aerospike database.”

Many NoSQL databases started out prioritizing availability over consistency before gradually adding properties for the latter. Aerospike’s distinction is that it is a distributed, high-performance multi-model database (with support for key-value, document, graph and vector formats) that enables consistency for sophisticated, multirecord transactions.

With its consistency guarantees, it allows developers to concentrate on building the best logic for their applications without compromising their productivity, or progress, by worrying about concerns that are now handled at the database level.


Open source AI is already finding its way into production


Open source has long driven innovation and the adoption of cutting-edge technologies, from web interfaces to cloud-native computing. The same is true in the burgeoning field of open source artificial intelligence (AI). Open source AI models, and the tooling to build and use them, are multiplying, enabling developers around the world to build custom AI solutions and host them where they choose.

Survey results suggest that the use of open source AI models is already surprisingly widespread, and adoption is expected to grow as more models proliferate and more use cases emerge. Let’s take a look at the rise of open source AI, from increasingly capable small models to use cases in generative AI.

Explore how and why companies are using open source AI models in production today.

Learn how open source is changing the way developers use AI.

Look ahead at how small, open source models might be used in the future.

Open, or at least less-proprietary, models like the DeepSeek models, Meta’s Llama models, or those from Mistral AI can generally be downloaded and run on your own devices and, depending on the license, you can study and change how they work. Many are trained on smaller, more focused data sets. These models are sometimes referred to as small language models (SLMs), and they’re beginning to rival the performance of LLMs in some scenarios.

Study how the system works and inspect its components.

Modify the system for any purpose, including to change its output.

Information on all training data and where to obtain it.

Notably, this definition remains hotly debated, as some models described as open source don’t disclose training code or data and may have some usage restrictions. It might be best to consider openness a spectrum, with some models more open than others.

There are a number of benefits to working with these smaller models, explains Idan Gazit, head of GitHub Next. They cost less to run and can run in more places, including end-user devices. But perhaps most importantly, they’re easier to customize.

While LLMs excel as general-purpose chatbots that need to respond to a wide variety of questions, organizations tend to turn to smaller AI models when they need niche solutions, explains Hamel Husain, an AI consultant and former GitHub employee. For instance, with an open source LLM you can define a grammar and require that a model only outputs valid tokens.

“Open models aren’t always improved, but the more narrow your task, the more open models will shine because you can fine tune that model and really differentiate them,” says Husain.

For example, an observability platform company hired Husain to help build a solution that could translate natural language into the company’s custom query language, making it easier for individuals to craft queries without having to learn the ins and outs of the query language.

This was a narrow use case—they only needed to generate their own query language and no others, and they needed to ensure it produced valid syntax. “Their query language is not something that is as prevalent as, let’s say, Python, so the model hadn’t seen many examples,” Husain says. “That made fine-tuning more helpful than it would have been with a less esoteric topic.” The business also wanted to maintain control over all data handled by the LLM without having to work with a third party.

Husain ended up building a custom solution using the then-latest version of Mistral AI’s widely used open models. “I typically use popular models because they’ve generally been fine-tuned already and there’s usually a paved path towards implementing them,” he says.

Open source brings structure to the world of LLMs.

One place you can see the rapid adoption of open source models is in tools designed to work with them. For example, Outlines is an increasingly popular tool for building custom LLM applications with both open source and proprietary models. It helps developers define structures for LLM outputs. You can use it, for example, to ensure an LLM outputs responses in JSON format. It was created in large part because of the need for finely tuned, task-specific AI applications.

At a previous job, Outlines co-creator and maintainer Rémi Louf needed to extract some information from a large collection of documents and export it in JSON format. He and his colleague Brandon Willard tried using general purpose LLMs like ChatGPT for the task, but they had trouble producing well-structured JSON outputs. Louf and Willard both had a background in compilers and interpreters, and noticed a similarity between building compilers and structuring the output of LLMs. They built Outlines to solve their own problems.
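The compiler connection can be made concrete with a tiny sketch of the core idea behind tools like Outlines: compile the target structure into a state machine, then at each decoding step mask out every token that would break it, so the output is well-formed by construction. The grammar, vocabulary, and stub “model” below are all invented for illustration; real tools operate on the model’s actual token logits.

```python
# State machine for a minimal grammar: a JSON string of digits, e.g. "42".
GRAMMAR = {                                  # state -> {allowed token: next state}
    "start":  {'"': "digits"},
    "digits": {**{d: "digits" for d in "0123456789"}, '"': "done"},
}

def constrained_decode(step_prefs):
    """Greedy decode: at each step take the most-preferred token the grammar
    allows, so the result is structurally valid no matter what the model
    'wants'. `step_prefs` is one token-preference table per step, standing
    in for the model's logits."""
    state, out = "start", []
    for prefs in step_prefs:
        if state == "done":
            break
        allowed = GRAMMAR[state]
        # Mask: only grammar-legal tokens compete; pick the preferred one.
        token = max(allowed, key=lambda t: prefs.get(t, 0))
        out.append(token)
        state = allowed[token]
    return "".join(out)

# The stub model "wants" to emit 'x' first, but the mask only permits '"'.
result = constrained_decode([{"x": 9}, {"4": 2}, {"2": 2}, {'"': 5}])
assert result == '"42"'
```

In practice the grammar is compiled from a regex or JSON schema into an automaton over the tokenizer’s full vocabulary, but the masking step is the same idea.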

They posted the project to Hacker News and it took off quickly. “It turns out that a lot of other people were frustrated with not being able to use LLMs to output to a particular structure reliably,” Louf says. The team kept working on it, expanding its capabilities and founding a startup. It now has more than 100 contributors and helped inspire OpenAI’s structured outputs feature.

“I can’t give names, but some very large companies are using Outlines in production,” Louf says.

There are, of course, downsides to building custom solutions with open source models. One of the biggest is the need to invest time and resources into prompt construction. And, depending on your application, you may need to stand up and manage the underlying infrastructure as well. All of that requires more engineering resources than using an API.

“Sometimes organizations want more control over their infrastructure,” Husain says. “They want predictable costs and latency and are willing to make decisions about those tradeoffs themselves.”

While open source AI models might not be a good fit for every problem, it’s still the early days. As small models continue to improve, new possibilities emerge, from running models on local hardware to embedding custom LLMs within existing applications.

Fine-tuned small models can already outperform larger models for certain tasks. Gazit expects developers will combine different small, customized models together and use them to complete different tasks. For example, an application might route a prompt with a question about the best way to implement a database to one model, while routing a prompt for code completion to another. “The strengths of many Davids might be mightier than one Goliath,” he says.
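The routing Gazit describes can be sketched in a few lines. The model names and the keyword-matching rule here are hypothetical stand-ins; a real router might use a small classifier instead:

```python
# Hypothetical prompt router: dispatch each prompt to a small specialized
# model. All model names and routing rules are invented for illustration.
ROUTES = {
    "sql": "db-advisor-3b",        # database questions
    "def ": "code-completer-1b",   # code completion
}
DEFAULT_MODEL = "general-chat-7b"  # fallback for everything else

def route(prompt: str) -> str:
    """Pick a model name via crude keyword matching on the prompt."""
    for needle, model in ROUTES.items():
        if needle in prompt.lower():
            return model
    return DEFAULT_MODEL

assert route("How should I index this SQL table?") == "db-advisor-3b"
```

Each specialized model stays small and cheap, and the dispatcher is the only component that needs to know about all of them.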

In the meantime, large, proprietary models will also keep improving, and you can expect both large and small model development to feed off of each other. “In the near term, there will be another open source revolution,” Louf says. “Innovation often comes from people who are resource constrained.”


Market Impact Analysis

Market Growth Trend

Year | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024
Growth | 7.5% | 9.0% | 9.4% | 10.5% | 11.0% | 11.4% | 11.5%

Quarterly Growth Rate

Q1 2024: 10.8% | Q2 2024: 11.1% | Q3 2024: 11.3% | Q4 2024: 11.5%

Market Segments and Growth Drivers

Segment | Market Share | Growth Rate
Enterprise Software | 38% | 10.8%
Cloud Services | 31% | 17.5%
Developer Tools | 14% | 9.3%
Security Software | 12% | 13.2%
Other Software | 5% | 7.5%

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity:

(Hype-cycle diagram: AI/ML, Blockchain, VR/AR, Cloud and Mobile positioned along the curve from Innovation Trigger through the Peak of Inflated Expectations and Trough of Disillusionment to the Slope of Enlightenment and Plateau of Productivity.)

Competitive Landscape Analysis

Company | Market Share
Microsoft | 22.6%
Oracle | 14.8%
SAP | 12.5%
Salesforce | 9.7%
Adobe | 8.3%

Future Outlook and Predictions

The GitHub Copilot agent landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:

Year-by-Year Technology Evolution

Based on current trajectory and expert analyses, we can project the following development timeline:

2024: Early adopters begin implementing specialized solutions with measurable results
2025: Industry standards emerging to facilitate broader adoption and integration
2026: Mainstream adoption begins as technical barriers are addressed
2027: Integration with adjacent technologies creates new capabilities
2028: Business models transform as capabilities mature
2029: Technology becomes embedded in core infrastructure and processes
2030: New paradigms emerge as the technology reaches full maturity

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:

(Interactive maturity-curve diagram, plotting adoption/maturity against development stage from Innovation through Early Adoption, Growth, and Maturity to Decline/Legacy, available in full report.)

Innovation Trigger

  • Generative AI for specialized domains
  • Blockchain for supply chain verification

Peak of Inflated Expectations

  • Digital twins for business processes
  • Quantum-resistant cryptography

Trough of Disillusionment

  • Consumer AR/VR applications
  • General-purpose blockchain

Slope of Enlightenment

  • AI-driven analytics
  • Edge computing

Plateau of Productivity

  • Cloud infrastructure
  • Mobile applications

Technology Evolution Timeline

1-2 Years
  • Technology adoption accelerating across industries
  • Digital transformation initiatives becoming mainstream
3-5 Years
  • Significant transformation of business processes through advanced technologies
  • New digital business models emerging
5+ Years
  • Fundamental shifts in how technology integrates with business and society
  • Emergence of new technology paradigms

Expert Perspectives

Leading experts in the software dev sector provide diverse perspectives on how the landscape will evolve over the coming years:

"Technology transformation will continue to accelerate, creating both challenges and opportunities."

— Industry Expert

"Organizations must balance innovation with practical implementation to achieve meaningful results."

— Technology Analyst

"The most successful adopters will focus on business outcomes rather than technology for its own sake."

— Research Director

Areas of Expert Consensus

  • Acceleration of Innovation: The pace of technological evolution will continue to increase
  • Practical Integration: Focus will shift from proof-of-concept to operational deployment
  • Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
  • Regulatory Influence: Regulatory frameworks will increasingly shape technology development

Short-Term Outlook (1-2 Years)

In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing software dev challenges:

  • Technology adoption accelerating across industries
  • Digital transformation initiatives becoming mainstream

These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.

Mid-Term Outlook (3-5 Years)

As technologies mature and organizations adapt, more substantial transformations will emerge in how software development is approached and implemented:

  • Significant transformation of business processes through advanced technologies
  • New digital business models emerging

This period will see significant changes in development architecture and operational models, with increasing automation and integration between previously siloed functions. Organizations will shift from reactive to proactive postures.

Long-Term Outlook (5+ Years)

Looking further ahead, more fundamental shifts will reshape how software development is conceptualized and implemented across digital ecosystems:

  • Fundamental shifts in how technology integrates with business and society
  • Emergence of new technology paradigms

These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach software development as a fundamental business function rather than a purely technical discipline.

Key Risk Factors and Uncertainties

Several critical factors could significantly impact the trajectory of software dev evolution:

  • Technical debt accumulation
  • Security integration challenges
  • Maintaining code quality

Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.

Alternative Future Scenarios

The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:

Optimistic Scenario

Rapid adoption of advanced technologies with significant business impact

Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.

Probability: 25-30%

Base Case Scenario

Measured implementation with incremental improvements

Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.

Probability: 50-60%

Conservative Scenario

Technical and organizational barriers limiting effective adoption

Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.

Probability: 15-20%

Scenario Comparison Matrix

Factor | Optimistic | Base Case | Conservative
Implementation Timeline | Accelerated | Steady | Delayed
Market Adoption | Widespread | Selective | Limited
Technology Evolution | Rapid | Progressive | Incremental
Regulatory Environment | Supportive | Balanced | Restrictive
Business Impact | Transformative | Significant | Modest

Transformational Impact

Technology is becoming increasingly embedded in all aspects of business operations. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.

The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented challenges and innovative new capabilities.

Implementation Challenges

Technical complexity and organizational readiness remain key challenges. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.

Regulatory uncertainty, particularly around emerging technologies like AI, will require flexible architectures that can adapt to evolving compliance requirements.

Key Innovations to Watch

Artificial intelligence, distributed systems, and automation technologies are leading innovation. Organizations should monitor these developments closely to maintain competitive advantage.

Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.

Technical Glossary

Key technical terms and definitions to help understand the technologies discussed in this article.

Understanding the following technical concepts is essential for grasping the full implications of the technologies discussed in this article. These definitions provide context for both technical and non-technical readers.


interface (intermediate): Well-designed interfaces abstract underlying complexity while providing clearly defined methods for interaction between different system components.

API (beginner): APIs serve as the connective tissue in modern software architectures, enabling different applications and services to communicate and share data according to defined protocols and data formats. Example: Cloud service providers like AWS, Google Cloud, and Azure offer extensive APIs that allow organizations to programmatically provision and manage infrastructure and services.

platform (intermediate): Platforms provide standardized environments that reduce development complexity and enable ecosystem growth through shared functionality and integration capabilities.