LLMs, DevOps, and Big Data Musings


Category: Programming
Source: dev.to

I have had the privilege of talking with several folks over the past few weeks about the state of the world. Well, the tech world, to be precise. I was inspired by my chat this morning with the wonderful Beth Glenfield. We talked about Product, trends, AI, and potential future pain points.

One of the things we both love about being product managers is that we get to watch trends. See the ebb and flow of ideas and how they are implemented. See the gaps and the impact of those gaps.

Remember the beginning of the DevOps movement, before the cloud was pervasive, when sysadmins didn't see a reason to automate? They didn't because, while automation allowed for efficiency, it didn't necessarily reduce complexity. So the first things they automated were the boring, annoying processes and tasks. Automation adds layers, and layers add complexity. Then the cloud came along and forced the need for different layers and different complexity. The cloud is faster, and faster is better, right? For many, it felt like you HAD to move to the cloud to stay competitive. That created a movement. A successful one.

Now we are on the precipice of another movement. LLMs and ML are that movement. You can see it. I can almost hear the earnest manager I spoke to years ago. He asked, “Do we have the DevOps now?” I’m excited to hear what the LLM version of this is going to be.

But here's a lesson from the DevOps movement for the next wave: be mindful of the bottlenecks you are creating. If you only make one team 10x more efficient but not the others, you haven't really gained anything. This is where internal discoverability, guardrails, reliability, tools that let teams reduce toil from code to security, and tools that help your TPMs become critical. These products are out there right now. Some have LLM features; others exist because you frankly have a gap today, and that gap is only going to get wider as your use of LLMs becomes more pervasive.

Further, AI tooling will need to move beyond "I can code a thing" or "I can create art" and toward a dedicated data store or focus. To be effective, LLMs need clean, unbiased, and relevant data to provide answers that make the most sense. Our grand AI experiment will need to incorporate the lessons of the past, learning from the big data camp, or even the battle between Betamax and VHS.

While Devin might be able to code, LLMs are always lagging. Humans are the innovators. Moreover, if you don't have juniors who learn how to code, debug, and write tests, you don't get mid-level devs who start to architect and build more complex features, or seniors and principals who do some gnarly problem-solving and are systems thinkers. It serves no one to replace a person with "AI". I'm pretty sure that's the start of a dystopian story.

Why do I mention this? I mention it because I don’t believe “AI” is going to replace us as people. I do think, similar to DevOps, it is going to change how we work and the kind of work we do. That’s not a bad thing. It means systems thinking, internal discoverability, reliability, operational excellence, and context are going to rule the next wave.

We need those systems thinkers more than ever. What if "AI" turned every dev into a 10x dev? Do we have 10x CS, PS, TPMs, RelEng, Ops, QA, AppSec, Product, Marketing, Sales, Product Marketing, tech writers, and pre- and post-sales engineers? Those systems thinkers will help us avoid the impending bottleneck(s).

Building a product is a team effort. I think we’ve gotten over the factory analogy, right? Products from idea to support to sunset are complex. It is Jeremy Bearimy in the best way possible.


LLMs are here to stay. What's going to be interesting is seeing what we've learned from our past mistakes. What I do know is that the leaders I've met who are building the Next Wave tools understand the space, how we got here, and where we need to go. In short, I'm excited for the future.
