Neovim’s Future Could Have AI and Brain-Computer Interfaces

Berlin-based Neovim maintainer Justin M. Keyes shared a strange phrase to open his traditional “State of Neovim” keynote at the annual Vim conference in Tokyo. (Neovim is a modern refactoring of the Vim text editor.) It was something he’d overheard a random sports fan saying that seemed to hold “some sort of symbolism”:

“He’s giving us a memo,” Keyes told his audience. “He’s telling us that we need to move fast.” So the driving question now, as Keyes sees it, is not just how to be the best Vim-like modal text editor, but how Neovim can compete with other projects like VS Code and Zed.

So as thousands of Neovim fans watched, Keyes shared his chart of what’s been propelling the project over the years. It started with Neovim’s technical architecture (and multithreading support), and as it gained momentum, there was more conscious project management for “channeling energy into useful directions.”

But by 2024, Neovim was ready to survey the larger market of all text editors, “looking for signals from — you know, what is the world telling us, what is the universe telling us.”

And for Keyes that means “thinking about brain-computer interfaces, thinking about the architecture of other projects like VS Code and Zed, thinking about how we could maybe leverage Zig and things like that.”

With mind-boggling ambition, Keyes offered his audience a forward-looking assessment of Neovim — but also some wide-ranging thoughts on the computing landscape in general. Eyes on the future, he shared not only his thoughts on new capabilities and coming changes for Neovim, but also on the role of AI in text editors, and even the possibility of a WebAssembly-based Neovim artifact that could be used in other software.

And yes, brain-computer interfaces kept coming up.

“In 10 years, probably, brain-computer interfaces will be not uncommon,” Keyes said matter-of-factly, “and keyboards are going to be more of a fallback input method.

“This is kind of interesting to think about, not only for Vim and Neovim, but Zed and VS Code and other types of development tools.”

Even in that far-away future, Keyes thinks Vim-like editors, with their macro-friendly programmability and logically structured interfaces, will remain relevant “for at least the first couple of generations of brain-computer interfaces, even if the literal keys on a keyboard are no longer relevant!”

And in this world with brain-computer interfaces, “buttons and menus are going to be even more outdated than keyboard-driven interfaces.” So while Vim-like editors let users choose different “modes” for text editing — like “visual” for selecting text chunks — Keyes is surprised modes don’t seem to exist in Zed and VS Code.

Which they’ll need when brain-computer interfaces come along.

Keyes is aware that rival editors like VS Code and Cursor are including some AI capabilities, but he’s already looking ahead to the future. “Eventually, it’ll find its way hopefully into Neovim, if we set things up in the way that we should. And that’s our job, is to see what the gaps are, so we can help either third-party extensions give the kind of context that is needed to AI extensions, or possibly build some primitives into our standard library if it comes to that.” (Keyes also believes that AI “is a feature; it’s not a product.”)

I desperately want neovim to win the AI race. — Benjamin Scott (@TheBenzend) February 21, 2025.

But Keyes described himself as “excited about AI,” and even put up an example of a prompt he’d used that successfully generated a first pass at a Neovim function. “That’s useful,” he noted, “and that is why our documentation is crucial.

“If you don’t explain/document things for humans, the AI will also be weaker.”

Keyes added thoughtfully that AI is “an extra brain.”

Looking to Neovim’s more immediate future, Keyes presented his proposals for next year, “the things that I really, really, really want to solve next year.” And top of his list? “Press-Enter needs to go away,” he said, referring to a combination of keys that must be hit to confirm exceptions. Keyes called these mandatory confirmations “evil” and “the reason that people think other projects are more stable.” He added: “When exceptions get thrown in VS Code, VS Code does not, like, send you an email and print out a fax, or whatever. It just logs it. That’s what we should do.”

things you can, and should, do with Neovim: cursor trails [website] [website] — Justin M. Keyes (@justinmk) December 4, 2024

Another coming feature was inspired by tmux, the terminal multiplexer. “Now soon what will land is that you can just hit one Ctrl+Z and detach your UI from any Neovim session.”

Keyes also put up a list of “stuff that is relatively easy to do, that we should just do. It just makes the editor a complete answer, a complete application.” For example, when people drag and drop a file into Neovim, or paste in an image or a URL, “it should do something useful.” He also wants to get started on an API for images. And there should be profiling and debugging for the Lua scripting language.

Keyes described much of his list as “aspirational … except for presentation mode.” When Neovim opens a file that’s been formatted with Markdown, Keyes wants to see an easy way to toggle between formatted and unformatted text. “I propose maybe Z+Tab or Backspace as the keys,” Keyes said. “Maybe even Help docs, and I don’t know what else — but at least Markdown. Markdown is the JSON of the docs world. It’s just … it’s everywhere. You need to support it.”

Keyes reminded the audience of his favorite site for downloading Neovim plug-ins, while adding as another aside that there could even be some kind of Neovim package format “hopefully next year.”

He returned to it later with a slide with just one line:

It referred to the new packaging format Keyes is working on.

“I do think that we should get around to trying out this package format and seeing what happens with it. It’s a low-cost thing to try out. There’s like 5% remaining to do on the spec that I just need to finish, and then we can see where that goes.”

But Keyes also took a moment for some what-if scenarios. What if Vim’s modal text editing became a library, allowing it to be integrated into projects? Keyes’ response? “That’s one way things could go,” but another direction would be if Neovim itself became “consumable” by other projects. Maybe Neovim could have its own WebAssembly artifact offering speedy modal-text-editing functionality, “and just text editing in general.”

And this leads Keyes to an interesting aside. “You need interactive commands, and any project that doesn’t start out with this ends up adding it in some kind of limp form later on.”

Keyes reminded his audience that VS Code’s documentation acknowledges it “collects telemetry data, which is used to help understand how to improve the product.” But Neovim is popular “even though we never hired any data-science witch doctors to tell us what the users want.

“Actually it turns out you can get a pretty good signal about that from the issue tracker, social media and also your own intuition.”

And as proof, Keyes shared that Neovim had reached another milestone. “We have doubled the number of GitHub downloads since last year. That’s some sort of signal.”.

“It could even be from bots, or whatever. It doesn’t matter, because guess what? We have twice as many bots as we did last year downloading from GitHub!”

And for the Homebrew installer, “for the first year ever we have more installs than Vim itself.” Keyes’ slide showed 373,000 downloads for Neovim and 296,000 for Vim — where in 2023, Vim’s 238,000 downloads were 20,000 more than Neovim’s.

It all seemed to prove that the state of Neovim is strong. In perhaps the ultimate sign of health, even its contributor count is growing. “And for the fourth year in a row, we were ‘Most Loved’ on Stack Overflow,” Keyes added. “Whatever that means. We have no idea, but we’re winning it every year, and so it’s very important. Until we stop winning it!”

“This is all you need. You don’t need telemetry.”

Gemini Code Assist Review: Code Completions Need Improvement

It was never going to be long before Google got into the game of code assistance with Gemini. The headline is the number of completions being offered for free on their platform — 90x what GitHub Copilot offers — and behind that, the understanding that scale is something Google does well. So this is the same play as Gmail giving every user a much larger chunk of space than competitors, back when it launched in 2004.

Gemini Code Assist claims support for 20+ languages, which again is a strong offering at scale. But as Google doesn’t offer its own IDE, it is likely to be dependent in many cases on Microsoft’s Visual Studio Code (VS Code). I’m beginning to wonder if alternatives like JetBrains are getting a massive boost for this reason. However, the default seems to be VS Code:

You may have seen how I moved code assistants from Copilot to Augment, and I will do the same thing now — shifting from Augment to Gemini Code Assist in order to check it out.

I opened VS Code on my MacBook M4 and immediately searched for the extension, freshly available on that day:

Loading the extension appeared to take some time, although there is no progress meter in VS Code. Of course, the servers will have been hammered on day one for a new version.

There was a welcome page, but nothing about setup. As I hadn’t even signed into Google, it was highly unlikely to actually be ready. The left sidebar had the Gemini icon, and selecting it did fill the sidebar with a request to log in. But this just underlines what I’ve stated previously: the user journey with extension-loaded code assistants is weak within VS Code.

I was thrown into a web page to sign in, and navigated back to my IDE to now see the following:

While the sidebar was controlled by Gemini, I still didn’t know who was controlling the code completions. The bottom toolbar seemed to suggest it may be cohabiting with Augment:

(My Copilot menu had moved to the top, even though the Copilot extension itself said it needed restarting.)

I disabled the Augment extension to allow Gemini to take sole control. But this is a mess that needs to be fixed by Microsoft.

Meanwhile, Google needs to place a warning on its extension just like Augment did.

As before, I’ll make real changes to my project and see how the code completion behaves. My game project uses random numbers, but I need to take them from a list so that I can generate them in place, or use a pre-rolled set of numbers for testing. As the order in which a number might be taken can change during development, I need to make sure each call takes a fixed index on the list, and in addition check that I don’t accidentally take the same number twice. However, this would be hard to manage within loops, so I return a block of numbers.

I found that Gemini made some poor completions. It tended to jump in before understanding context. For example:

...
private RN[] randomNumbers = new RN[MAXRN];
...
for (short i = 0; i < MAXRN; i++) {
    float rnd = UnityEngine.Random.value;
    short converted = (short)(rnd * 100);
    randomNumbers[i] = new RN(converted, false);
    TagDebug.Log($"We set number {i} to {rnd}"); // Line suggested by Gemini
}

In the example above, Gemini suggested a log line that prints out the index and value, but misses out the conversion to ‘converted’ that my list actually uses. I often found that as I was updating my solution and deleting older code, Gemini would suggest putting the deleted lines back in. It makes me wonder how much Gemini has been tested during agile development, where refactoring is so common.

(At some stage Augment turned itself back on, and uninstalling the extension made no difference. Only after quitting and opening back up did Augment fully disappear. Now, I’m very aware that my situation reviewing different assistants makes me much more open to clashing assistants, but I’m afraid that while the review should be about Gemini, I was unimpressed with VS Code’s complete inability to control clashing extensions.)

Gemini sensibly stresses “Smart Actions” using command-i, which sits between code completion and trying to write bulk code for you:

I first asked Gemini to “Explain this” on the selected method, which is where a block of random numbers is requested. It took about 10 seconds (which is quite long) to produce an output:

“The RequestNewIndexedBlock method is designed to provide a way for different parts of your game to request and ‘reserve’ a specific block of pre-generated or randomly-generated numbers from the RandomNumber system. It acts as a manager for distributing these blocks.”

This is a very good summary. The key here is that it recognized the reservation pattern with that term “reserve,” which is not based on any cues I left in code. It also understood the “different parts of your game” point, and that the numbers may be pre-generated. It also presented a detailed “code breakdown” that was perhaps a little too detailed, if anything.

While the method and the whole class work, you can see that I should be using a ushort (unsigned short) for storeindex, as a negative index is not sensible. So I tried the second smart action “fix” to see if it proposed this:

Telling us to be cautious with generated code at this stage is a bit like telling Alice that following rabbits down holes into Wonderland might have unpredictable results!

As is the norm, it created a temporary diff file. The result suggested a superfluous check on the block, which while technically correct relied on assumptions about the internals of another class. If anything, it did make me reduce the access to the RNBlock, so that was indirectly good. Inexplicably, because the temporary file was not part of the project, Copilot tried to make suggestions! My previous remarks about how VS Code handles extensions cover this.

Finally, I let it try the final smart action “Generate unit tests” for this method. I have a separate assembly in the project with tests and a mocking library (Moq), though I’ve written none for this class — and I wasn’t sure Gemini could see these. Glancing at the code, you can see that there are two cases to test as I throw exceptions for them.

It did a good job of creating a setup and teardown, for both a pre-rolled and a generated random set. For the main happy path, the test was sensible enough:

[Test]
public void RequestNewIndexedBlock_ValidIndex_ReturnsBlockAndMarksAsTaken()
{
    // Arrange
    RandomNumber rng = RandomNumber.GetActiveRandomNumber();
    short validIndex = 5;

    // Act
    RandomNumber.RNBlock block = rng.RequestNewIndexedBlock(validIndex);

    // Assert
    Assert.IsNotNull(block);
    Assert.AreEqual(validIndex, block.storeindex);
    Assert.IsTrue(block.taken);
}

I’ve made my concerns about VS Code’s inability to handle multiple extensions vying for the same LLM functionality perfectly clear, but Gemini Code Assist has to do better at helping the user disable previous extensions.

The only thing that concerns me regarding Gemini Code Assist is the speed of code completion, which at times was slightly tardy. While code is being refactored, no code assistant can ever be certain which parts of the code are no longer part of the new solution. But I generally felt that Gemini didn’t quite keep up with me — despite the fact that the code explanations were precise.

The quality of the code completions was generally ok — although in my recent tests both Copilot and Augment gave me superior results. But your mileage may vary, and I don’t doubt that scaling out enough processing time may be an issue here. Also if there’s one thing we know, it’s that LLM output only improves over time.

Taking RWD To The Extreme

Tomasz Jakut reflects on the evolution of web design, recalling the days when table layouts were all the rage and Flash games were shaping the online culture. And then responsive web design (RWD) happened — and it often feels like the end of history; well, at least for web design. After all, we still create responsive websites, and that’s The True Way™ of doing layouts on the web. Yet the current year, 2025, marks the 15th anniversary of Ethan Marcotte’s article, which forever changed web development. That’s a whole era in “web” years. So, maybe something happened after RWD, but it was so obvious that it went nearly invisible. Let’s try to uncover this something.

When Ethan Marcotte conceived RWD, web technologies were far less mature than today. As web developers, we started to grasp how to do things with floats after years of stuffing everything inside table cells. There weren’t many possible ways to achieve a responsive site. There were two of them: fluid grids (based on percentages) and media queries, which were a hot new thing back then.

What was lacking was a real layout system that would allow us to lay things out on a page instead of improvising with floating content. We had to wait several years for Flexbox to appear. And CSS Grid followed that.

Undoubtedly, new layout systems native to the browser were groundbreaking 10 years ago. They were revolutionary enough to usher in a new era. In her talk “Everything You Know About Web Design Just Changed” at the An Event Apart conference in 2019, Jen Simmons proposed a name for it: Intrinsic Web Design (IWD). Let’s disarm that fancy word first. According to the Merriam-Webster dictionary, intrinsic means “belonging to the essential nature or constitution of a thing.” In other words, IWD is a natural way of doing design for the web. And that boils down to using CSS layout systems for… laying out things. That’s it.

It does not sound that groundbreaking on its own. But it opens a lot of possibilities that weren’t available earlier with float-based layouts or table ones. We got the best things from both worlds: two-dimensional layouts (like tables with their rows and columns) with wrapping abilities (like floating content when there is not enough space for it). And there are even more goodies, like mixing fixed-sized content with fluid-sized content or intentionally overlapping elements:

See the Pen [Overlapping elements [forked]]([website]) by Comandeer.

As Jen points out in her presentation, this allows us to finally make even fancy designs in the “web” way, eliminating the tension between web designers and developers. No more “This print design can’t be translated for the web!” Well, at least far fewer arguments….

But here’s the strange part: that new era didn’t come. IWD never became a household term, the same way that RWD has. We’re still stuck in the good old RWD era. Yet, Flexbox and Grid became indispensable tools in (nearly) every web developer’s tool belt. They are so natural and intrinsic that we intuitively started to use them, missing their whole revolutionary aspect. Instead of a groundbreaking revolution of IWD, we chose a longer but steadier evolution of RWD.

I believe that IWD paved the way for more radical ideas, even if it hasn’t developed into a bona fide era. And the central point of all of those radical ideas is a browser — that part of the web that sits between our code and the user. Web developers have always had a love-hate relationship with browsers. (Don’t get me started on Internet Explorer!) They often amuse us both with new capabilities (WebGPU for the win!) and cryptic bugs (points suddenly take up more space, what the heck?). But at the end of the day, we tell the browser what to do to display our page the way we want it to be displayed to the user.

In some ways, IWD challenged that approach. CSS layout systems aren’t taking direct orders from a web developer. We can barely hint at what we want them to do. But the final decision lies with the browser. And what if we take it even further?

Heydon Pickering proposed the term algorithmic layouts to describe such an approach. The web is inherently algorithmic. Even the simplest page uses internal algorithms to lay things out: a block of text forms a flow layout that will wrap when there is not enough space in the line. And that’s so obvious that we don’t even think about it. That’s just how text works, and that’s how it has always worked. And yet, there is an algorithm behind that, just as there is behind all CSS layout systems. We can use Flexbox to make a simple layout that displays on a single line by default and falls back to wrapping onto multiple lines if there is not enough space, just like text.

See the Pen [Resizable flexbox container [forked]]([website]) by Comandeer.

And we get all of these algorithms for free! The only thing we need to do is to allow Flexbox to wrap with the flex-wrap property. And it wraps by itself. Imagine that you need to calculate when and how the layout should wrap — that would be a nightmare. Fortunately, browsers are good at laying out things. After all, they have been doing it for over 35 years. They’re experienced in that, so just let them handle this stuff. That’s the power of algorithmic layouts: they work the best when left alone.
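
As a minimal sketch of that hands-off approach (the class name and values are illustrative, not taken from the demo above):

/* A row of items that sits on one line while there is room
   and wraps onto further lines by itself when there is not. */
.toolbar {
  display: flex;
  flex-wrap: wrap;
  gap: 1rem;
}

Everything else, such as when lines break and how many items fit per line, is the browser’s decision.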

You know only one thing: there is that peculiar thing between your website and the user called browser — and it knows much more about the page and the user than you do. It’s like an excellent translator that you hire for those extremely important business negotiations with someone from a totally foreign culture that you don’t know anything about. But the translator knows it well and translates your words with ease, gently closing the cultural chasm between you and the potential customer. You don’t want to force them to translate your words literally — that could be catastrophic. What you want is to provide them with your message and allow them to do the magic of shaping it into a message understandable to the customer. And the same applies to browsers; they know better how to display your website.

I think that Jen, Heydon, and Andy speak of the same thing — an approach that shifts much of the work from the web developer to the browser. Instead of telling it how to do things, we rather tell it what to do and leave it to figure out the “how” part by itself.

As Jeremy Keith notes, there has been a shift from an imperative design (telling the browser “how”) to a declarative one (telling the browser “what”). Specifically, Jeremy says that we ought to “focus on creating the right inputs rather than trying to control every possible output.”

That’s quite similar to what we do with AI today: we meticulously craft our prompts (inputs) and hope to get the right answer (output). However, there is a key difference between AI and browsers: the latter is not a black box.

Everything (well, most of what) the browser does is described in detail in open web standards, so we’re able to make educated guesses about the output. Granted, we can’t be sure if the user sees the two-column layout on their 8K screen or a one-column layout on their microwave’s small screen (if it can run DOOM, it can run a web browser!). But we know for sure that we defined these two edge cases, and the browser works out everything in between.

In theory, it all sounds nice and easy. Let’s try to make the declarative design more actionable. If we gather the techniques mentioned by Jen, Heydon, Andy, and Jeremy, we will end up with roughly the following list:

They’re available in basically every browser on the market and have been for years, and I believe that they are, indeed, widely used. But from time to time, a question pops up: Which layout system should I use? And the answer is: Yes. Mix and match! After all, different elements on the page work best with different layout systems. Take, for example, the navigation on top with several links in one row that should wrap if there is not enough space. Sounds like Flexbox. Is the main part divided into three columns, with the third column positioned at the bottom of the content? Definitely CSS Grid. As for the text content? Well, that’s flow.
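
A rough sketch of that mix and match, with illustrative selectors rather than anything from a real project:

/* Top navigation: one row of links that wraps when space runs out (Flexbox). */
nav ul {
  display: flex;
  flex-wrap: wrap;
  gap: 0.5rem 1.5rem;
}

/* Main part: three columns laid out on a grid (CSS Grid). */
main {
  display: grid;
  grid-template-columns: 1fr 2fr 1fr;
}

/* Text content: no layout property at all; it stays in normal flow. */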

HTML is the backbone of the web. It’s the language that structures and formats the content for the user. And it comes with a huge bonus: it loads and displays to the user even if CSS and JavaScript fail to load for whatever reason. In other words, the website should still make sense to the user even if the CSS that provides the layout and the JavaScript that provides the interactivity are no-shows. A website is a text document, not so different from the one you can create in a text processor, like Word or LibreOffice Writer.

Semantic HTML also provides essential accessibility aspects, like headings that are often used by screen-reader users for navigating pages. This is why starting not just with any markup but semantic markup for meaningful structure is a crucial step to embracing native web aspects.

We often need to adjust the font size of our content when the screen size changes. Smaller screens mean being able to display less content, and larger screens provide more affordance for additional content. This is why we ought to make content as fluid as possible, by which I mean the content should automatically adjust based on the screen’s size. A fluid typographic system optimizes the content’s legibility when it’s being viewed in different contexts.

Nowadays, we can achieve truly fluid type with one line of CSS, thanks to the clamp() function:

font-size: clamp(1rem, calc(1rem + [website]), 6rem);

The maths involved in it goes quite above my head. Thankfully, there is a detailed article on fluid type by Adrian Bece here on Smashing Magazine and Utopia, a handy tool for doing the maths for us. But beware — there be dragons! Or at least possible accessibility issues. By limiting the maximum font size, we could break the ability to zoom the text content, violating one of the WCAG’s requirements (though there are ways to address that).
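
For reference, a complete version of such a rule might look like the following; the slope and bounds are illustrative stand-ins, not the values from the snippet above:

/* Grows smoothly from 1rem towards 6rem as the viewport widens.
   Keeping a rem term inside calc() ties the result to the user's
   font-size preference, which softens the zooming caveat above. */
h1 {
  font-size: clamp(1rem, calc(1rem + 3vw), 6rem);
}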

Fortunately, fluid space is much easier to grasp: if gaps (margins) between elements are defined in font-dependent units (like rem or em), they will scale alongside the font size. Yet rest assured, there are also caveats.
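
A minimal illustration, assuming an illustrative utility class:

/* The gap is 1.5 times the current font size, so it scales with the text. */
.stack > * + * {
  margin-block-start: 1.5em;
}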

Yes, that’s this over-20-year-old technique for creating web pages. And it’s still relevant today in 2025. Many interesting features have limited availability — like cross-page view transitions. They won’t work for every user, but enabling them is as simple as adding one line of CSS:
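
Based on the CSS View Transitions Level 2 draft, that one line is presumably the cross-document opt-in:

@view-transition { navigation: auto; }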

It won’t work in some browsers, but it also won’t break anything. And if some browser catches up with the standard, the code is already there, and view transitions start to work in that browser on your website. It’s sort of like opting into the feature when it’s ready.

It applies to many more things in CSS (unsupported grid is just a flow layout, unsupported masonry layout is just a grid, and so on) and other web technologies.
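
For instance, a hedged sketch of the masonry case (the masonry value is still experimental syntax, so treat it as illustrative):

.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
  /* Ignored where unsupported, leaving a regular grid behind. */
  grid-template-rows: masonry;
}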

Trust it because it knows much more about how safe it is for users to surf the web. Besides, it’s a computer program, and computer programs are pretty good at calculating things. So instead of calculating all these breakpoints ourselves, take their helping hand and allow them to do it for you. Just give them some constraints. Make that element no wider than 60 characters and no narrower than 20 characters — and then relax, watching the browser make it 37 characters on some super rare viewport you’ve never encountered before. It Just Works™.
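
The 60-characters example from the paragraph above, expressed as constraints in a short illustrative sketch:

/* Only the bounds are stated; the browser picks the actual measure. */
article {
  min-inline-size: 20ch;
  max-inline-size: 60ch;
}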

But trusting the browser also means trusting the open web. After all, these algorithms responsible for laying things out are all parts of the standards.

That’s a bonus point from me. Layout systems introduced the concept of logical CSS. Flexbox does not have a notion of a left or right side — it has a start and an end. And that way of thinking crept into other areas of CSS, creating the whole CSS Logical Properties and Values module. After working more with layout systems, logical CSS seems much more intuitive than the old “physical” one. It also has at least one advantage over the old way of doing things: it works far better with internationalized content.

See the Pen [Physical vs logical CSS [forked]]([website]) by Comandeer.

The demo above shows the difference between physical and logical CSS. The physical tiles have the text-align: left property applied, while the logical ones have text-align: start. When the “left to right” inline text direction is set, both of them look the same. But when the “right to left” one is set, the logical tiles “move” their start to the right, moving the text alongside it.

Additionally, containers with tiles have their width set — the physical container with the width: 400px property and the logical one with the inline-size: 400px property. They both look the same as long as the block text direction is set to “horizontal.” But when it is set to “vertical,” the logical one switches its width with the height (as now the line of text is going from top to bottom, not from left to right), and the physical one keeps its initial width and height.
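
In code, the two pairs being compared look roughly like this (class names are illustrative):

/* Physical: pinned to the left edge and to horizontal writing. */
.tile-physical {
  text-align: left;
  width: 400px;
}

/* Logical: follows the inline direction and the writing mode. */
.tile-logical {
  text-align: start;
  inline-size: 400px;
}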

“What do you mean by taking RWD to the extreme — it’s already pretty extreme!”

I hear you. But I believe that there’s still room for more. The changes described above are a big shift in the RWD world. But this shift is mainly technological. Fluid type without the clamp() function or algorithmic layouts without Flexbox and Grid couldn’t possibly exist — at least not without some horrible hacks (does anyone still remember CSS locks?). Our web development routine just caught up to what the modern browser can do. Yet, there is still another shift that could happen: a mental one.

After I applied this way of thinking to rem and em units, I entered a new world of thinking about layouts: a ratio-based one. Because there is still a myth that 1 rem roughly equals 16 pixels — except it doesn’t. It could equal any number of pixels because it all depends on what value the user sets in their browser. So, thinking in concrete numbers is, in fact, incompatible with rem and em units. The only fully compatible way is to… keep it as-is.

And I know that sounds crazy, but it forces a change in thinking about websites. If you don’t know the most basic information about your content (the font size), you can’t really apply any concrete numbers to your layout. You can only think in ratios. If the font size equals ✕, your heading could equal 2✕, the main column 60✕, some text input 10✕, and so on. This way, everything should work out with any font size and, by extension, scale up with any font size.

We’ve already been doing that with layout systems — we allow them to work on ratios and figure out how big each part of the layout should be. And we’ve also been doing that with rem and em units for scaling things up depending on font size. The only thing left is to completely forget the “ 1rem = 16px ” equation and fully embrace the exciting shores of unknown dimensions.
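
A hedged sketch of that ratio-based thinking, using the illustrative ratios above and deliberately setting no base font size:

/* Whatever font size the user configured is ✕; everything else is a ratio of it. */
h1    { font-size: 2rem; }         /* heading = 2✕ */
main  { max-inline-size: 60rem; }  /* main column = 60✕ */
input { inline-size: 10rem; }      /* text input = 10✕ */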

But that sort of mental shift comes with one not-so-straightforward consequence. Not setting the font size and working with the user-provided one instead fully moves the power from the web developer to the browser and, effectively, the user. And the browser can provide us with far more information about user preferences.

After all, the users know what they need best. If they set the default font size to 64 pixels, they would be grateful if we respected that value. We don’t know why they did it (maybe they have some kind of vision impairment, or maybe they simply have a screen far away from them); we only know they did it — and we respect that.

Market Impact Analysis

Market Growth Trend

Year:   2018  2019  2020  2021   2022   2023   2024
Growth: 7.5%  9.0%  9.4%  10.5%  11.0%  11.4%  11.5%

Quarterly Growth Rate

Quarter: Q1 2024  Q2 2024  Q3 2024  Q4 2024
Growth:  10.8%    11.1%    11.3%    11.5%

Market Segments and Growth Drivers

Segment              Market Share  Growth Rate
Enterprise Software  38%           10.8%
Cloud Services       31%           17.5%
Developer Tools      14%           9.3%
Security Software    12%           13.2%
Other Software       5%            7.5%

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity:

(Hype cycle chart: AI/ML, Blockchain, VR/AR, Cloud, and Mobile plotted across the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity stages.)

Competitive Landscape Analysis

Company     Market Share
Microsoft   22.6%
Oracle      14.8%
SAP         12.5%
Salesforce  9.7%
Adobe       8.3%

Future Outlook and Predictions

The software development landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:

Year-by-Year Technology Evolution

Based on current trajectory and expert analyses, we can project the following development timeline:

2024: Early adopters begin implementing specialized solutions with measurable results
2025: Industry standards emerging to facilitate broader adoption and integration
2026: Mainstream adoption begins as technical barriers are addressed
2027: Integration with adjacent technologies creates new capabilities
2028: Business models transform as capabilities mature
2029: Technology becomes embedded in core infrastructure and processes
2030: New paradigms emerge as the technology reaches full maturity

Technology Maturity Curve

Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:

(Maturity curve diagram: adoption/maturity plotted against development stage, from Innovation and Early Adoption through Growth and Maturity to Decline/Legacy, locating emerging tech, the current focus, established tech, and mature solutions. Interactive diagram available in full report.)

Innovation Trigger

  • Generative AI for specialized domains
  • Blockchain for supply chain verification

Peak of Inflated Expectations

  • Digital twins for business processes
  • Quantum-resistant cryptography

Trough of Disillusionment

  • Consumer AR/VR applications
  • General-purpose blockchain

Slope of Enlightenment

  • AI-driven analytics
  • Edge computing

Plateau of Productivity

  • Cloud infrastructure
  • Mobile applications

Technology Evolution Timeline

1-2 Years
  • Technology adoption accelerating across industries
  • Digital transformation initiatives becoming mainstream
3-5 Years
  • Significant transformation of business processes through advanced technologies
  • New digital business models emerging
5+ Years
  • Fundamental shifts in how technology integrates with business and society
  • Emergence of new technology paradigms

Expert Perspectives

Leading experts in the software development sector provide diverse perspectives on how the landscape will evolve over the coming years:

"Technology transformation will continue to accelerate, creating both challenges and opportunities."

— Industry Expert

"Organizations must balance innovation with practical implementation to achieve meaningful results."

— Technology Analyst

"The most successful adopters will focus on business outcomes rather than technology for its own sake."

— Research Director

Areas of Expert Consensus

  • Acceleration of Innovation: The pace of technological evolution will continue to increase
  • Practical Integration: Focus will shift from proof-of-concept to operational deployment
  • Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
  • Regulatory Influence: Regulatory frameworks will increasingly shape technology development

Short-Term Outlook (1-2 Years)

In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing software development challenges:

  • Technology adoption accelerating across industries
  • Digital transformation initiatives becoming mainstream

These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.

Mid-Term Outlook (3-5 Years)

As technologies mature and organizations adapt, more substantial transformations will emerge in how security is approached and implemented:

  • Significant transformation of business processes through advanced technologies
  • New digital business models emerging

This period will see significant changes in security architecture and operational models, with increasing automation and integration between previously siloed security functions. Organizations will shift from reactive to proactive security postures.

Long-Term Outlook (5+ Years)

Looking further ahead, more fundamental shifts will reshape how cybersecurity is conceptualized and implemented across digital ecosystems:

  • Fundamental shifts in how technology integrates with business and society
  • Emergence of new technology paradigms

These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach security as a fundamental business function rather than a technical discipline.

Key Risk Factors and Uncertainties

Several critical factors could significantly impact the trajectory of software development evolution:

  • Technical debt accumulation
  • Security integration challenges
  • Maintaining code quality

Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.

Alternative Future Scenarios

The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:

Optimistic Scenario

Rapid adoption of advanced technologies with significant business impact

Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.

Probability: 25-30%

Base Case Scenario

Measured implementation with incremental improvements

Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.

Probability: 50-60%

Conservative Scenario

Technical and organizational barriers limiting effective adoption

Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.

Probability: 15-20%

Scenario Comparison Matrix

Factor                   Optimistic      Base Case     Conservative
Implementation Timeline  Accelerated     Steady        Delayed
Market Adoption          Widespread      Selective     Limited
Technology Evolution     Rapid           Progressive   Incremental
Regulatory Environment   Supportive      Balanced      Restrictive
Business Impact          Transformative  Significant   Modest

Transformational Impact

Technology is becoming increasingly embedded in all aspects of business operations. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.

The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.

Implementation Challenges

Technical complexity and organizational readiness remain key challenges. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.

Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.

Key Innovations to Watch

Artificial intelligence, distributed systems, and automation technologies are leading innovation. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.

Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.

Technical Glossary

Key technical terms and definitions to help understand the technologies discussed in this article.

Understanding the following technical concepts is essential for grasping the full implications of the technologies discussed in this article. These definitions provide context for both technical and non-technical readers.

interface (intermediate)
Well-designed interfaces abstract underlying complexity while providing clearly defined methods for interaction between different system components.

platform (intermediate)
Platforms provide standardized environments that reduce development complexity and enable ecosystem growth through shared functionality and integration capabilities.

API (beginner)
APIs serve as the connective tissue in modern software architectures, enabling different applications and services to communicate and share data according to defined protocols and data formats.
(API concept visualization: how APIs enable communication between different software systems.)
Example: Cloud service providers like AWS, Google Cloud, and Azure offer extensive APIs that allow organizations to programmatically provision and manage infrastructure and services.
