Baseline Status in a WordPress Block

You know about Baseline, right? And you may have heard that the Chrome team made a web component for it.
Of course, we could simply drop the HTML component into the page. But I never know where we’re going to use something like this. The Almanac, obs. But I’m sure there are times when embedding it in other pages and posts makes sense.
That’s exactly what WordPress blocks are good for. We can take an already reusable component and make it repeatable when working in the WordPress editor. So that’s what I did! That component you see up there is the web component formatted as a WordPress block. Let’s drop another one in just for kicks.
Pretty neat! I saw that Pawel Grzybek made an equivalent for Hugo. There’s an Astro equivalent, too. Because I’m fairly green with WordPress block development, I thought I’d write up a bit on how it’s put together. There are still rough edges I’d like to smooth out later, but this is a good enough point to share the basic idea.
I used the @wordpress/create-block package to bootstrap and initialize the project. All that means is I cd’d into the /wp-content/plugins directory from the command line and ran the install command to plop it all in there.
The command prompts you through the setup process to name the project and all that.
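For reference, that command is the package’s own npx runner (the prompts take care of the slug and options):

```bash
cd wp-content/plugins
npx @wordpress/create-block@latest
```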
The baseline-status.php file is where the plugin is registered. And yes, it looks exactly the same as it has for years, just not in a style.css file like it is for themes. The difference is that the create-block package does some lifting to register the widget so I don’t have to:
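That registration is the classic plugin header comment; a sketch of what it looks like here (fields abbreviated, values assumed from the block’s metadata):

```php
<?php
/**
 * Plugin Name:       Baseline Status
 * Description:       Displays current Baseline availability for web platform features.
 * Version:           0.1.0
 * Text Domain:       baseline-status
 */
```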
The create-block package also did some filling of the blanks in the block.json file based on the onboarding process:
{ "$schema": "[website]", "apiVersion": 2, "name": "css-tricks/baseline-status", "version": "[website]", "title": "Baseline Status", "category": "widgets", "icon": "chart-pie", "description": "Displays current Baseline availability for web platform attributes.", "example": {}, "supports": { "html": false }, "textdomain": "baseline-status", "editorScript": "file:./[website]", "editorStyle": "file:./[website]", "style": "file:./[website]", "render": "file:./[website]", "viewScript": "file:./[website]" }.
Going off some tutorials, including ones here on CSS-Tricks, I knew that WordPress blocks render twice, once on the front end and once on the back end, and there’s a file for each one in the src folder:
- render.php: Handles the front-end view.
- edit.js: Handles the back-end view.
Cool. I started with the web component’s markup:
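Here’s a sketch of that markup, based on the component’s documented usage (treat the CDN path as approximate, and anchor-positioning is just an example feature ID):

```html
<script src="https://cdn.jsdelivr.net/npm/baseline-status/baseline-status.min.js" type="module"></script>

<baseline-status featureId="anchor-positioning"></baseline-status>
```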
I’d hate to inject that script by hand every single time the block is used, so the plugin enqueues everything instead:

```php
// Adds type="module" to the enqueued script tag
function csstricks_add_type_attribute( $tag, $handle, $src ) {
	if ( 'baseline-status-widget' === $handle ) { // handle name assumed
		$tag = '<script type="module" src="' . esc_url( $src ) . '"></script>';
	}
	return $tag;
}
add_filter( 'script_loader_tag', 'csstricks_add_type_attribute', 10, 3 );

// Enqueues the scripts and styles for the back end
function csstricks_enqueue_block_editor_assets() {
	// Enqueues the scripts
	wp_enqueue_script(
		'baseline-status-widget-block',
		plugins_url( 'build/index.js', __FILE__ ), // path elided in the original
		array( 'wp-blocks', 'wp-element', 'wp-editor' ),
		false,
	);
	// Enqueues the styles
	wp_enqueue_style(
		'baseline-status-widget-block-editor',
		plugins_url( 'build/index.css', __FILE__ ), // path elided in the original
		array( 'wp-edit-blocks' ),
		false,
	);
}
add_action( 'enqueue_block_editor_assets', 'csstricks_enqueue_block_editor_assets' );
```
The final result bakes the script directly into the plugin so that it adheres to the WordPress Plugin Directory guidelines. If that wasn’t the case, I’d probably keep the hosted script intact because I’m completely uninterested in maintaining it. Oh, and that csstricks_add_type_attribute() function is there to help import the file as an ES module. There’s a wp_enqueue_script_module() function that should handle that, but I couldn’t get it to do the trick.
With that in hand, I can put the component’s markup into a template. The render.php file is where all the front-end goodness resides, so that’s where I dropped the markup:
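A minimal sketch of that template, with the placeholder in place (my sketch; the original markup may differ slightly):

```php
<div <?php echo get_block_wrapper_attributes(); ?>>
	<baseline-status featureId="[FEATURE]"></baseline-status>
</div>
```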
That get_block_wrapper_attributes() thing is recommended by the WordPress docs as a way to output all of a block’s information, which is handy for debugging things such as which features the block ought to support.
[FEATURE] is a placeholder that will eventually tell the component which web platform feature to render information about. We may as well work on that now. I can register attributes for the component in block.json:
"attributes": { "showBaselineStatus": { "featureID": { "type": "string" } },.
Now we can modify the markup in render.php to echo the featureID once it’s been set.
There will be more edits to that markup a little later. But first, I need to put the markup in the edit.js file so that the component renders in the WordPress editor when adding it to the page.
useBlockProps is the JavaScript equivalent of get_block_wrapper_attributes() and can be good for debugging on the back end.
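A minimal sketch of edit.js at this stage, assuming the same markup as the template:

```js
import { useBlockProps } from '@wordpress/block-editor';

export default function Edit() {
	return (
		<div { ...useBlockProps() }>
			<baseline-status featureId="[FEATURE]"></baseline-status>
		</div>
	);
}
```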
At this point, the block is fully rendered on the page when dropped in! The problems are:

- It’s not passing in the feature I want to display.
- There are no settings for choosing a feature in the Block Editor.
I’ll work on the latter first. That way, I can simply plug the right variable in there once everything’s been hooked up.
One of the nicer aspects of WordPress DX is that we have direct access to the same controls that WordPress uses for its own blocks. We import them and extend them where needed.
I started by importing the stuff I needed in edit.js:
```js
import { InspectorControls, useBlockProps } from '@wordpress/block-editor';
import { PanelBody, TextControl } from '@wordpress/components';
import './editor.scss'; // create-block's default editor stylesheet
```
- InspectorControls are good for debugging.
- useBlockProps are what can be debugged.
- PanelBody is the main wrapper for the block settings.
- TextControl is the field I want to pass into the markup where [FEATURE] currently is.
- editor.scss provides styles for the controls.
Before I get to the controls, there’s an Edit function that wraps all the work:
```js
export default function Edit( { attributes, setAttributes } ) {
	// Controls
}
```
First up are the InspectorControls and the PanelBody:
```js
export default function Edit( { attributes, setAttributes } ) {
	return (
		// React components need a parent element
		<>
			<InspectorControls>
				<PanelBody>
					{ /* Controls */ }
				</PanelBody>
			</InspectorControls>
		</>
	);
}
```
Then it’s time for the actual text input control. I really had to lean on this introductory tutorial on block development for the following code, notably this section.
```js
export default function Edit( { attributes, setAttributes } ) {
	return (
		<>
			<InspectorControls>
				<PanelBody>
					{ /* label text below is an assumption */ }
					<TextControl
						label="Feature ID"
						value={ attributes.featureID }
						onChange={ ( value ) => setAttributes( { featureID: value } ) }
					/>
				</PanelBody>
			</InspectorControls>
		</>
	);
}
```
Oh yeah! Can’t forget to define the featureID variable because that’s what populates the component’s markup. Back in edit.js:
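A sketch of that hookup, with the controls from above collapsed into a comment:

```js
export default function Edit( { attributes, setAttributes } ) {
	const { featureID } = attributes;

	return (
		<div { ...useBlockProps() }>
			{ /* InspectorControls from the previous step */ }
			<baseline-status featureId={ featureID }></baseline-status>
		</div>
	);
}
```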
In short: the feature’s ID is what constitutes the block’s attributes. Now I need to register that attribute so the block recognizes it. Back in block.json, in a new section:
"attributes": { "featureID": { "type": "string" } },.
Pretty straightforward, I think. Just a single text field that’s a string. It’s at this point that I can finally wire it up to the front-end markup in render.php:
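A sketch of the wired-up template, assuming the attribute arrives in render.php’s $attributes array (which WordPress passes to a block’s render file):

```php
<div <?php echo get_block_wrapper_attributes(); ?>>
	<baseline-status featureId="<?php echo esc_attr( $attributes['featureID'] ); ?>"></baseline-status>
</div>
```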
I struggled with this more than I care to admit. I’ve dabbled with styling the Shadow DOM but only academically, so to speak. This is the first time I’ve attempted to style a web component with Shadow DOM parts on something being used in production.
If you’re new to Shadow DOM, the basic idea is that it prevents styles and scripts from “leaking” in or out of the component. This is a big selling point of web components because it’s so darn easy to drop them into any project and have them “just” work.
But how do you style a third-party web component? It depends on how the developer sets things up because there are ways to allow styles to “pierce” through the Shadow DOM. Ollie Williams wrote “Styling in the Shadow DOM With CSS Shadow Parts” for us a while back and it was super helpful in pointing me in the right direction. Chris has one, too.
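For a taste of what that looks like, a shadow part lets outside CSS reach an element the component explicitly exposes. A hypothetical example (I’m not claiming baseline-status exposes a part by this name):

```css
/* Only works if the component marks an element with part="badge" */
baseline-status::part(badge) {
  color: var(--orange);
}
```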
First off, I knew I could select the element directly without any classes, IDs, or other attributes:
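That selector is simply the custom element’s tag name:

```css
baseline-status {
  /* styles go here */
}
```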
I peeked at the script’s source code to see what I was working with. I had a few light styles I could use right away on the type selector:
```css
baseline-status {
  background: #000;
  border: solid 5px #f8a100;
  border-radius: 8px;
  color: #fff;
  display: block;
  margin-block-end: [website];
  padding: .5em;
}
```
I noticed CSS color variables in the source code that I could use in place of the hard-coded values, so I redefined them and set them where needed:
```css
baseline-status {
  --color-text: #fff;
  --color-outline: var(--orange);
  border: solid 5px var(--color-outline);
  border-radius: 8px;
  color: var(--color-text);
  display: block;
  margin-block-end: var(--gap);
  padding: calc(var(--gap) / 4);
}
```
Now for a tricky part. When fully rendered, the component’s DOM contains the feature name (“Anchor positioning”), its availability (“Limited availability”), a “Learn more” link, and the explanation: “This feature is not Baseline because it does not work in some of the most widely-used browsers.”
I wanted to play with the idea of hiding the heading in some contexts but thought twice about it, because not displaying the title only really works for Almanac content, where you’re on the page for the same feature that’s rendered in the component. In any other context, the heading is a “need” for establishing which feature we’re looking at. Maybe that can be a future enhancement where the heading can be toggled on and off.
The block is freely available in the WordPress Plugin Directory as of today! It’s the very first plugin I’ve submitted to WordPress on my own behalf, so this is really exciting for me!
This is far from fully baked but definitely gets the job done for now. In the future, it’d be nice if this thing could do a few more things.
How To Run DeepSeek R1 on AWS Using Infrastructure as Code

This weekend, I changed my perspective on open source AI deployment. While scrolling through my social feeds, I noticed many posts about DeepSeek, a new open-source language model, causing a stir in the AI community. As someone who regularly deploys infrastructure for production environments, I was intrigued by DeepSeek’s promise of competitive performance at a fraction of the cost of major commercial models.
What caught my attention wasn’t just the benchmark numbers (DeepSeek R1’s 79.8% score on the AIME 2024 mathematics test is impressive) but rather the practical possibility of running these models on standard cloud infrastructure. I decided to put this to the test by deploying DeepSeek on AWS using Pulumi for infrastructure as code. Here’s what I learned from the experience.
Understanding DeepSeek’s Place in the AI Landscape.
DeepSeek emerged from a Chinese AI startup founded in 2023. It brings something unique: High-performance language models released under the MIT license. While companies like OpenAI and Meta spend enormous resources on their models, DeepSeek achieves comparable results with significantly less investment.
In my testing, DeepSeek R1 demonstrated capabilities that make it particularly valuable for practical applications:
Mathematics processing with 79.8% accuracy on AIME 2024 tests.
Software engineering tasks with 49.2% accuracy on SWE-bench Verified.
General knowledge handling with a 90.8% score on MMLU.
What makes this especially interesting for development teams is the availability of distilled versions with 1.5B to 70B parameters, allowing deployment on various hardware configurations, from local machines to cloud instances.
Deploying DeepSeek: A Practical Infrastructure Approach.
After evaluating DeepSeek’s capabilities, I created a reproducible deployment process using Pulumi and AWS. The goal was to establish a GPU-powered environment that could efficiently handle the model while remaining cost-effective.
The deployment architecture I developed consists of three main components:
- A GPU-enabled EC2 instance for model hosting
- Ollama for model management and API compatibility
- Open WebUI for interaction and testing
Here’s the real-world deployment process I developed, focusing on maintainability and scalability:
Before embarking on our self-hosted DeepSeek model journey, ensure you have:

- An AWS account with credentials configured locally
- The Pulumi CLI installed
- A basic understanding of Ollama, a tool that simplifies running large language models (LLMs) on your own hardware
To get started, I created a new Pulumi project:
```bash
pulumi new aws-typescript
```
I chose TypeScript for this example, but you can select any language you prefer.
After setting up the project, I deleted the sample code and replaced it with the following configurations.
To download the NVIDIA drivers, I needed to create an instance role with S3 access (AmazonS3ReadOnlyAccess is enough here).
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as fs from "fs";

const role = new aws.iam.Role("deepSeekRole", {
    name: "deepseek-role",
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
            {
                Action: "sts:AssumeRole",
                Effect: "Allow",
                Principal: {
                    Service: "ec2.amazonaws.com",
                },
            },
        ],
    }),
});

new aws.iam.RolePolicyAttachment("deepSeekS3Policy", {
    policyArn: "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    role: role.name,
});

const instanceProfile = new aws.iam.InstanceProfile("deepSeekProfile", {
    name: "deepseek-profile",
    role: role.name,
});
```
Next, I needed to create a VPC, subnet, Internet Gateway, and route table. This is all done with the following code snippet:
```typescript
const vpc = new aws.ec2.Vpc("deepSeekVpc", {
    cidrBlock: "10.0.0.0/16", // CIDR elided in the original; any private range works
    enableDnsHostnames: true,
    enableDnsSupport: true,
});

const subnet = new aws.ec2.Subnet("deepSeekSubnet", {
    vpcId: vpc.id,
    cidrBlock: "10.0.1.0/24", // assumed subnet within the VPC range above
    availabilityZone: pulumi.interpolate`${aws.getAvailabilityZones().then(it => it.names[0])}`,
    mapPublicIpOnLaunch: true,
});

const internetGateway = new aws.ec2.InternetGateway("deepSeekInternetGateway", {
    vpcId: vpc.id,
});

const routeTable = new aws.ec2.RouteTable("deepSeekRouteTable", {
    vpcId: vpc.id,
    routes: [
        {
            cidrBlock: "0.0.0.0/0",
            gatewayId: internetGateway.id,
        },
    ],
});

const routeTableAssociation = new aws.ec2.RouteTableAssociation("deepSeekRouteTableAssociation", {
    subnetId: subnet.id,
    routeTableId: routeTable.id,
});

const securityGroup = new aws.ec2.SecurityGroup("deepSeekSecurityGroup", {
    vpcId: vpc.id,
    egress: [
        {
            fromPort: 0,
            toPort: 0,
            protocol: "-1",
            cidrBlocks: ["0.0.0.0/0"],
        },
    ],
    ingress: [
        {
            fromPort: 22,
            toPort: 22,
            protocol: "tcp",
            cidrBlocks: ["0.0.0.0/0"],
        },
        {
            fromPort: 3000,
            toPort: 3000,
            protocol: "tcp",
            cidrBlocks: ["0.0.0.0/0"],
        },
        {
            fromPort: 11434,
            toPort: 11434,
            protocol: "tcp",
            cidrBlocks: ["0.0.0.0/0"],
        },
    ],
});
```
Finally, I can create the EC2 instance. For this, I need to create an SSH key pair and retrieve the Amazon Machine Image (AMI) to use for the instance.
I use a GPU instance type here, but you can change it to any other instance type that supports GPUs. You can find more information about instance types in the AWS documentation.
If you need to create the key pair, run the following command:
```bash
openssl genrsa -out deepseek.pem 2048
openssl rsa -in deepseek.pem -pubout > deepseek.pub
# Convert the public key to the OpenSSH format expected by the EC2 key pair
ssh-keygen -f deepseek.pub -i -mPKCS8 > deepseek-ssh.pub
```
```typescript
const keyPair = new aws.ec2.KeyPair("deepSeekKey", {
    publicKey: pulumi.output(fs.readFileSync("deepseek-ssh.pub", "utf-8")), // the OpenSSH-format key from the step above
});

const deepSeekAmi = aws.ec2
    .getAmi({
        filters: [
            {
                name: "name",
                values: ["amzn2-ami-hvm-*-x86_64-gp2"], // Amazon Linux 2; name pattern partially elided in the original
            },
            {
                name: "architecture",
                values: ["x86_64"],
            },
        ],
        owners: ["137112412989"], // Amazon
        mostRecent: true,
    })
    .then(ami => ami.id);

const deepSeekInstance = new aws.ec2.Instance("deepSeekInstance", {
    ami: deepSeekAmi,
    instanceType: "g4dn.xlarge", // type elided in the original; any GPU-capable type works
    keyName: keyPair.keyName,
    rootBlockDevice: {
        volumeSize: 100,
        volumeType: "gp3",
    },
    subnetId: subnet.id,
    vpcSecurityGroupIds: [securityGroup.id],
    iamInstanceProfile: instanceProfile.name,
    userData: fs.readFileSync("cloud-init.yaml", "utf-8"), // the cloud-config shown below
    tags: {
        Name: "deepSeek-server",
    },
});

export const amiId = deepSeekAmi;
export const instanceId = deepSeekInstance.id;
export const instancePublicIp = deepSeekInstance.publicIp;
```
Then, we configure the GPU instance with proper drivers and dependencies, install Ollama and run DeepSeek with this cloud config.
```yaml
#cloud-config
users:
  - default
package_update: true
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - openjdk-17-jre-headless
  - gcc
runcmd:
  - yum install -y gcc kernel-devel-$(uname -r)
  - aws s3 cp --recursive s3://ec2-linux-nvidia-drivers/latest/ .
  - chmod +x NVIDIA-Linux-x86_64*.run
  - /bin/sh ./NVIDIA-Linux-x86_64*.run --tmpdir . --silent
  - touch /etc/modprobe.d/nvidia.conf
  - echo "options nvidia NVreg_EnableGpuFirmware=0" | sudo tee --append /etc/modprobe.d/nvidia.conf
  - yum install -y docker
  - usermod -a -G docker ec2-user
  - systemctl enable docker.service
  - systemctl start docker.service
  - curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
  - yum install -y nvidia-container-toolkit
  - nvidia-ctk runtime configure --runtime=docker
  - systemctl restart docker
  - docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama --restart always ollama/ollama
  - sleep 120
  - docker exec ollama ollama run deepseek-r1:7b
  - docker exec ollama ollama run deepseek-r1:14b
  - docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
With all configurations in place, we can deploy the infrastructure by running pulumi up.
This command provides a preview of the changes, allowing you to confirm before proceeding. Once confirmed, Pulumi creates the resources, and after some time, the EC2 instance is ready with DeepSeek R1 running.
To retrieve the public IP address of our EC2 instance, I used the following command to have Pulumi print the stack output:
```bash
pulumi stack output instancePublicIp
<your-ip>
```
I then opened the web UI at http://<your-ip>:3000/.
Head to the dropdown in the upper right corner and select the model you want to use.
I selected deepseek-r1:14b to test my model.
Finally, I used the central chat box to begin using the model. My example prompt is: What are Pulumi’s benefits?
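Since the security group also exposes Ollama’s API on port 11434, you can query the model directly too; a quick sanity check, swapping in your instance’s IP:

```bash
curl http://<your-ip>:11434/api/generate \
  -d '{"model": "deepseek-r1:7b", "prompt": "What are Pulumi benefits?", "stream": false}'
```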
After you’re done experimenting with DeepSeek, you can clean up all of the resources by running pulumi destroy.
DeepSeek represents a significant step forward in accessible AI deployment. The combination of MIT licensing and competitive performance could make it a viable option for production environments.
For teams considering DeepSeek deployment, I recommend:
Starting with the 7B model for a balanced performance/resource ratio.
Using infrastructure as code (like Pulumi) for reproducible deployments.
Implementing proper monitoring and scaling policies.
Testing thoroughly with production-like workloads before deployment.
My GitHub repository contains the code and configuration files from this deployment, allowing others to build upon this foundation for their AI infrastructure needs.
This experience has shown me that enterprise-grade AI deployment is increasingly accessible to smaller teams. As we continue to see advances in model efficiency and deployment tools, the barrier to entry for production AI will continue to lower, opening new possibilities for innovation across the industry.
If you’re interested in exploring AI models or need a robust setup for your projects, consider trying DeepSeek with Pulumi. Remember, while the setup is straightforward, securing your instance and understanding the model’s capabilities are crucial steps before going live.
Build a Notion-like editor with Rails

Notion has had a neat block-based editor for a long time. You type away in a paragraph element by default, but you can choose other block elements, like h1, ul, and so on. You can style the elements inline as well, keeping things super clear.
This article shows you the basic data modeling and logic to set it up. Enhancing it with JavaScript (Stimulus) and making things pretty will happen in a following article.
I am keeping this setup basic, with just a Page model to hold the blocks, but in a real application a page might belong to something like a Collection. I am also not adding all possible blocks, but feel free to reach out if you need specific guidance.
```bash
rails g model Page
```
```ruby
# Let's add the association already into app/models/page.rb
class Page < ApplicationRecord
  has_many :blocks, dependent: :destroy
end
```
Simple enough. For the Blocks I will be using DelegatedType. This allows you to have shared attributes in one Block model, while keeping block-specific attributes in their own models. It works perfectly for the editor.
```bash
rails g model Block page:belongs_to blockable:belongs_to{polymorphic}
```
```bash
rails generate model Block::Text content:text
rails generate model Block::Heading level:integer content:string
```
Let's run rails db:migrate and make some changes to the model files:
```ruby
# app/models/block.rb
class Block < ApplicationRecord
  belongs_to :page

  delegated_type :blockable, types: %w[Block::Text Block::Heading]
end

# app/models/block/text.rb
class Block::Text < ApplicationRecord
  has_one :block, as: :blockable, dependent: :destroy
end

# app/models/block/heading.rb
class Block::Heading < ApplicationRecord
  has_one :block, as: :blockable, dependent: :destroy

  validates :level, inclusion: { in: 1..6 }
end
```
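To make the delegated type mechanics concrete, here's a quick console sketch (assuming the models above); delegated_type generates the blockable_name, block_heading?, and block_heading helpers the views below rely on:

```ruby
page = Page.create
block = page.blocks.create(blockable: Block::Heading.new(level: 1, content: "Hello"))

block.blockable_name        # => "block_heading"
block.block_heading?        # => true
block.block_heading.content # => "Hello"
```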
Let's also seed the database so we have some semi-real data to look at:
```ruby
# db/seeds.rb
page = Page.create

blocks = [
  { type: "Block::Heading", attributes: { level: 1, content: "Welcome to Rails Wonderland" } },
  { type: "Block::Text", attributes: { content: "Once upon a time, in a land full of gems, there was a brave developer named Ruby." } },
  { type: "Block::Heading", attributes: { level: 2, content: "The Quest for the Perfect Gem" } },
  { type: "Block::Text", attributes: { content: "Ruby embarked on a quest to find the perfect gem, one that would solve all N+1 queries." } },
  { type: "Block::Heading", attributes: { level: 3, content: "Enter the Realm of Active Record" } },
  { type: "Block::Text", attributes: { content: "In the mystical realm of Active Record, Ruby learned the ancient art of associations." } },
  { type: "Block::Heading", attributes: { level: 3, content: "The Trials of Migration" } },
  { type: "Block::Text", attributes: { content: "With every migration, Ruby grew stronger, mastering the power of schema changes." } },
  { type: "Block::Text", attributes: { content: "And thus, the legend of Ruby and the Rails was born, inspiring developers across the world." } }
]

blocks.each do |block_data|
  blockable = block_data[:type].constantize.create(block_data[:attributes])
  Block.create(page: page, blockable: blockable)
end
```
And finally a basic route, controller + view and partials:
```ruby
# config/routes.rb
Rails.application.routes.draw do
  resources :pages, only: %w[show]
end

# app/controllers/pages_controller.rb
class PagesController < ApplicationController
  def show
    @blocks = Page.find(params[:id]).blocks
  end
end
```
```erb
# app/views/pages/show.html.erb
<%= render @blocks %>

# app/views/blocks/_block.html.erb
<%= render "blocks/blockable/#{block.blockable_name}", block: block %>

# app/views/blocks/blockable/_block_heading.html.erb
<%= content_tag "h#{block.block_heading.level}", block.block_heading.content %>

# app/views/blocks/blockable/_block_text.html.erb
<%= content_tag :p, block.block_text.content %>
```
Wow, that was a lot. But if you navigate to http://localhost:3000/pages/1 you should see the rendering of your first block-based page. Yay! 🎉
With the basic modeling in place and being able to render the page's blocks, let's make the page editable. I like to start with the most basic version and then enhance using JavaScript.
```ruby
# config/routes.rb
Rails.application.routes.draw do
  resources :pages, only: %w[show edit] do
    resources :blocks, module: :pages, only: %w[create update]
  end
end

# app/controllers/pages_controller.rb
class PagesController < ApplicationController
  before_action :set_page, only: %w[show edit]

  def show
    @blocks = @page.blocks
  end

  def edit
  end

  private

  def set_page
    @page = Page.find(params[:id])
  end
end
```
```erb
# app/views/pages/edit.html.erb
<%= render partial: "pages/block", collection: @page.blocks %>

# app/views/pages/_block.html.erb
<%= form_with model: [block.page, block] do |form| %>
  <%= form.fields_for :blockable do |blockable_form| %>
    <%= render "blocks/editable/#{block.blockable_type.underscore}", form: blockable_form %>
  <% end %>

  <%= form.submit %>
<% end %>
```
Because each block has its own “blockable”, which in turn can have different fields, let's create a separate partial for each:
```erb
# app/views/blocks/editable/block/_heading.html.erb
<%= form.text_field :level %>
<%= form.text_area :content %>

# app/views/blocks/editable/block/_text.html.erb
<%= form.text_area :content %>
```
Let's continue by also allowing users to actually update ánd create new blocks:
```ruby
# app/controllers/pages/blocks_controller.rb
class Pages::BlocksController < ApplicationController
  before_action :set_page, only: %w[create update]

  def create
    @page.blocks.create!(
      blockable: params[:blockable_type].constantize.new(new_block_params)
    )

    redirect_to edit_page_path(@page)
  end

  def update
    Block.find(params[:id]).update(existing_block_params)

    redirect_to edit_page_path(@page)
  end

  private

  def set_page
    @page = Page.find(params[:page_id])
  end

  def new_block_params
    params.permit(blockable_attributes: [:level])[:blockable_attributes].to_h.compact_blank
  end

  def existing_block_params
    params.require(:block).permit(:id, blockable_attributes: [:id, :level, :content])
  end
end
```
Now let's add a few buttons for creating new blocks to the pages#edit view:
```erb
<%= render partial: "pages/block", collection: @page.blocks %>

<%= button_to "Add paragraph", page_blocks_path(@page), params: { blockable_type: "Block::Text" } %>
<%= button_to "Add h1", page_blocks_path(@page), params: { blockable_type: "Block::Heading", blockable_attributes: { level: 1 } } %>
<%= button_to "Add h2", page_blocks_path(@page), params: { blockable_type: "Block::Heading", blockable_attributes: { level: 2 } } %>
<%# etc %>
```
Now all the basics are in place. New blocks can be created and existing blocks can be updated. In an upcoming article I want to expand on this foundation, add a small Stimulus controller to improve the UX, and also include Tailwind CSS to make things look a fair bit better.
Market Impact Analysis
Market Growth Trend
| 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | 2024 |
|---|---|---|---|---|---|---|
| 7.5% | 9.0% | 9.4% | 10.5% | 11.0% | 11.4% | 11.5% |
Quarterly Growth Rate
| Q1 2024 | Q2 2024 | Q3 2024 | Q4 2024 |
|---|---|---|---|
| 10.8% | 11.1% | 11.3% | 11.5% |
Market Segments and Growth Drivers
| Segment | Market Share | Growth Rate |
|---|---|---|
| Enterprise Software | 38% | 10.8% |
| Cloud Services | 31% | 17.5% |
| Developer Tools | 14% | 9.3% |
| Security Software | 12% | 13.2% |
| Other Software | 5% | 7.5% |
Competitive Landscape Analysis
| Company | Market Share |
|---|---|
| Microsoft | 22.6% |
| Oracle | 14.8% |
| SAP | 12.5% |
| Salesforce | 9.7% |
| Adobe | 8.3% |
Future Outlook and Predictions
The Baseline Status WordPress landscape is evolving rapidly, driven by technological advancements, changing threat vectors, and shifting business requirements. Based on current trends and expert analyses, we can anticipate several significant developments across different time horizons:
Year-by-Year Technology Evolution
Based on current trajectory and expert analyses, we can project the following development timeline:
Technology Maturity Curve
Different technologies within the ecosystem are at varying stages of maturity, influencing adoption timelines and investment priorities:
Innovation Trigger
- Generative AI for specialized domains
- Blockchain for supply chain verification
Peak of Inflated Expectations
- Digital twins for business processes
- Quantum-resistant cryptography
Trough of Disillusionment
- Consumer AR/VR applications
- General-purpose blockchain
Slope of Enlightenment
- AI-driven analytics
- Edge computing
Plateau of Productivity
- Cloud infrastructure
- Mobile applications
Technology Evolution Timeline
- Technology adoption accelerating across industries
- Digital transformation initiatives becoming mainstream
- Significant transformation of business processes through advanced technologies
- New digital business models emerging
- Fundamental shifts in how technology integrates with business and society
- Emergence of new technology paradigms
Expert Perspectives
Leading experts in the software development sector provide diverse perspectives on how the landscape will evolve over the coming years:
"Technology transformation will continue to accelerate, creating both challenges and opportunities."
— Industry Expert
"Organizations must balance innovation with practical implementation to achieve meaningful results."
— Technology Analyst
"The most successful adopters will focus on business outcomes rather than technology for its own sake."
— Research Director
Areas of Expert Consensus
- Acceleration of Innovation: The pace of technological evolution will continue to increase
- Practical Integration: Focus will shift from proof-of-concept to operational deployment
- Human-Technology Partnership: Most effective implementations will optimize human-machine collaboration
- Regulatory Influence: Regulatory frameworks will increasingly shape technology development
Short-Term Outlook (1-2 Years)
In the immediate future, organizations will focus on implementing and optimizing currently available technologies to address pressing software development challenges:
- Technology adoption accelerating across industries
- Digital transformation initiatives becoming mainstream
These developments will be characterized by incremental improvements to existing frameworks rather than revolutionary changes, with emphasis on practical deployment and measurable outcomes.
Mid-Term Outlook (3-5 Years)
As technologies mature and organizations adapt, more substantial transformations will emerge in how security is approached and implemented:
- Significant transformation of business processes through advanced technologies
- New digital business models emerging
This period will see significant changes in security architecture and operational models, with increasing automation and integration between previously siloed security functions. Organizations will shift from reactive to proactive security postures.
Long-Term Outlook (5+ Years)
Looking further ahead, more fundamental shifts will reshape how cybersecurity is conceptualized and implemented across digital ecosystems:
- Fundamental shifts in how technology integrates with business and society
- Emergence of new technology paradigms
These long-term developments will likely require significant technical breakthroughs, new regulatory frameworks, and evolution in how organizations approach security as a fundamental business function rather than a technical discipline.
Key Risk Factors and Uncertainties
Several critical factors, including regulatory developments, investment trends, technological breakthroughs, and market adoption, could significantly impact the trajectory of software development evolution.
Organizations should monitor these factors closely and develop contingency strategies to mitigate potential negative impacts on technology implementation timelines.
Alternative Future Scenarios
The evolution of technology can follow different paths depending on various factors including regulatory developments, investment trends, technological breakthroughs, and market adoption. We analyze three potential scenarios:
Optimistic Scenario
Rapid adoption of advanced technologies with significant business impact
Key Drivers: Supportive regulatory environment, significant research breakthroughs, strong market incentives, and rapid user adoption.
Probability: 25-30%
Base Case Scenario
Measured implementation with incremental improvements
Key Drivers: Balanced regulatory approach, steady technological progress, and selective implementation based on clear ROI.
Probability: 50-60%
Conservative Scenario
Technical and organizational barriers limiting effective adoption
Key Drivers: Restrictive regulations, technical limitations, implementation challenges, and risk-averse organizational cultures.
Probability: 15-20%
Scenario Comparison Matrix
| Factor | Optimistic | Base Case | Conservative |
|---|---|---|---|
| Implementation Timeline | Accelerated | Steady | Delayed |
| Market Adoption | Widespread | Selective | Limited |
| Technology Evolution | Rapid | Progressive | Incremental |
| Regulatory Environment | Supportive | Balanced | Restrictive |
| Business Impact | Transformative | Significant | Modest |
Transformational Impact
Technology becoming increasingly embedded in all aspects of business operations. This evolution will necessitate significant changes in organizational structures, talent development, and strategic planning processes.
The convergence of multiple technological trends—including artificial intelligence, quantum computing, and ubiquitous connectivity—will create both unprecedented security challenges and innovative defensive capabilities.
Implementation Challenges
Technical complexity and organizational readiness remain key challenges. Organizations will need to develop comprehensive change management strategies to successfully navigate these transitions.
Regulatory uncertainty, particularly around emerging technologies like AI in security applications, will require flexible security architectures that can adapt to evolving compliance requirements.
Key Innovations to Watch
Artificial intelligence, distributed systems, and automation technologies leading innovation. Organizations should monitor these developments closely to maintain competitive advantages and effective security postures.
Strategic investments in research partnerships, technology pilots, and talent development will position forward-thinking organizations to leverage these innovations early in their development cycle.
Technical Glossary
Key technical terms and definitions to help understand the technologies discussed in this article.
Understanding the following technical concepts is essential for grasping the full implications of the security threats and defensive measures discussed in this article. These definitions provide context for both technical and non-technical readers.