How to replace Go worker goroutines with Docker containers and then scale to Kubernetes pods?

Ethan Jackson

I have a Go application that currently runs with the following structure:

  • A scheduler goroutine is responsible for managing task scheduling.
  • The scheduler spawns a fixed number of worker goroutines.
  • These workers perform tasks in a round-robin fashion.

Now, I want to scale this application in two steps:

Step 1: Instead of spawning goroutines to do the work, I want the scheduler to spin up Docker containers (each container performs the same task that a worker would have done). Once the container completes the task, it can exit.

Step 2: Eventually, I want to replace Docker containers with Kubernetes pods — so that each task is handled by a separate Kubernetes pod instead of a container directly spawned by the Go app.

What I want to know:

  1. How should I structure this migration?
    • What are the tools/libraries/packages in Go to programmatically run Docker containers?
    • What’s the best way to manage container lifecycle from Go?
  2. How can I run a Kubernetes pod from Go?
    • How do I create pods dynamically from Go code and pass task parameters to them?
  3. How should I handle results and monitoring?
    • How can I capture the output from the container or pod (stdout, exit status, etc.)?
    • How do I know when a container or pod has finished its work?
  4. Any best practices or architectural advice on managing this kind of scaling (especially round-robin worker logic moving to distributed containers/pods)?

Answer

To implement round-robin worker logic in Kubernetes, you need to understand that Kubernetes by itself does not provide round-robin task scheduling across pods — it schedules pods onto nodes, not tasks into pods. However, you can build round-robin task dispatch logic at the application level. Here’s how to design and implement it.

I think the best way to achieve this would be to use a

Message Queue with Round-Robin Semantics

All workers consume from a shared queue (e.g., Redis, NATS JetStream, RabbitMQ), and each worker:

  1. Picks the next available task from the queue

  2. Marks it as claimed (an atomic operation, so no two workers process the same task)

  3. Acknowledges the task once the work is done
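The "claim" step is exactly what the broker's delivery/ack machinery gives you for free. Purely as an illustration of the semantics (a real deployment would rely on the queue's delivery guarantees, not this), here is what an atomic claim looks like in-process; `claimSet` and `TryClaim` are hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
)

// claimSet records which task IDs have been claimed. TryClaim returns
// true exactly once per ID, even under concurrent callers, because the
// check-and-set happens under a single lock.
type claimSet struct {
	mu      sync.Mutex
	claimed map[string]bool
}

func newClaimSet() *claimSet {
	return &claimSet{claimed: make(map[string]bool)}
}

func (c *claimSet) TryClaim(id string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.claimed[id] {
		return false // another worker already claimed this task
	}
	c.claimed[id] = true
	return true
}

func main() {
	c := newClaimSet()
	fmt.Println(c.TryClaim("task-1")) // first claim succeeds
	fmt.Println(c.TryClaim("task-1")) // second claim is rejected
}
```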

Producer and consumer code is shared below:

Producer:

```go
package main

import (
	"log"
	"strconv"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func failOnError(err error, msg string) {
	if err != nil {
		log.Fatalf("%s: %s", msg, err)
	}
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	failOnError(err, "Failed to connect to RabbitMQ")
	defer conn.Close()

	ch, err := conn.Channel()
	failOnError(err, "Failed to open channel")
	defer ch.Close()

	q, err := ch.QueueDeclare(
		"task_queue",
		true,  // durable: queue survives broker restarts
		false, // autoDelete
		false, // exclusive
		false, // noWait
		nil,   // args
	)
	failOnError(err, "Failed to declare queue")

	for i := 1; i <= 10; i++ {
		// strconv.Itoa avoids the rune-arithmetic bug that breaks for i >= 10.
		body := "Task #" + strconv.Itoa(i)
		err = ch.Publish(
			"",     // default exchange
			q.Name, // routing key
			false,  // mandatory
			false,  // immediate
			amqp.Publishing{
				DeliveryMode: amqp.Persistent, // message survives broker restarts
				ContentType:  "text/plain",
				Body:         []byte(body),
			},
		)
		failOnError(err, "Failed to publish message")
		log.Printf("Sent: %s", body)
		time.Sleep(1 * time.Second)
	}
}
```

Consumer:

```go
package main

import (
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func failOnError(err error, msg string) {
	if err != nil {
		log.Fatalf("%s: %s", msg, err)
	}
}

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	failOnError(err, "Failed to connect to RabbitMQ")
	defer conn.Close()

	ch, err := conn.Channel()
	failOnError(err, "Failed to open channel")
	defer ch.Close()

	q, err := ch.QueueDeclare(
		"task_queue",
		true,  // durable
		false, // autoDelete
		false, // exclusive
		false, // noWait
		nil,   // args
	)
	failOnError(err, "Failed to declare queue")

	// Fair dispatch: deliver at most one unacknowledged message per worker,
	// so a slow worker does not accumulate a backlog.
	err = ch.Qos(1, 0, false)
	failOnError(err, "Failed to set QoS")

	msgs, err := ch.Consume(
		q.Name,
		"",    // consumer tag
		false, // autoAck: false means manual ack after the work is done
		false, // exclusive
		false, // noLocal
		false, // noWait
		nil,   // args
	)
	failOnError(err, "Failed to register consumer")

	forever := make(chan bool)
	go func() {
		for d := range msgs {
			log.Printf("Received: %s", d.Body)
			time.Sleep(2 * time.Second) // simulate work
			log.Printf("Done")
			d.Ack(false) // ack only this delivery, after the work succeeded
		}
	}()
	log.Printf(" [*] Waiting for messages. To exit press CTRL+C")
	<-forever
}
```
