Tech Culture News and Analysis | The New Stack https://thenewstack.io/tech-culture/

What Happens When 116 Makers Reimagine the Clock? https://thenewstack.io/what-happens-when-116-makers-reimagine-the-clock/ Sun, 24 Sep 2023 13:00:45 +0000 https://thenewstack.io/?p=22718671

As 2023 began, the Autodesk Instructables makers site launched a clock-building contest. Users would create and share instructions for an innovative clock, to be judged based on creativity, ingenuity, and execution (as well as the clarity and quality of presentation). The contest ultimately attracted 116 entries, with all the imaginative entries displayed online.

It was a fun exercise in the maker spirit, showing what happens when tech enthusiasts and home-brewing hobbyists try to reimagine something in a whole new way, to dream a new dream, and then take it as far as their imagination will go.

The end result was a wild parade of creativity — all happening at that invisible corner where technology and imagination meet.

‘You Rang?’

For example, UK-based “scottydog58” had already spent years working on a replica of the Addams Family mansion that could double as a kind of cuckoo clock. When it’s time to strike the hour, miniature cutouts of Gomez and Morticia Addams slide along a track to the front of the clock’s face. And then windows open to reveal the entire creepy Addams clan — Lurch the butler down in the foyer, Grandmama up in the belfry, crazy Uncle Fester (with a light bulb in his mouth), and even the two children playing a ghoulish game — Pugsley hanging Wednesday.

“Green LEDs are wired into the circuit to illuminate the characters from above,” explains the project’s how-to-build guide on Instructables. Somewhere inside there’s an Arduino Mega 2560 handling all the window-opening and character-animating. (The instructions explain that “Tinkercad allowed me to design the initial circuit layouts and the program code that would be needed to run the clock.”) Two additional “stepper” motors slide Gomez and Morticia into place. And the house itself was built with laser-cut (birch) plywood and medium-density fiberboard.

Off the Wall Clocks

The coding blog I-Programmer quipped that the wacky projects were “off-the-wall clocks.” Earlier this year they’d argued that in 2023 it’s much easier to experiment with your own homegrown timepieces thanks to widespread availability of both low-cost displays and single-board computers. “And the fun is inventing a new way to show the time.” Soon the imaginative entries were coming in from all over the world.

Colombia-based Lina Maria actually crocheted an entire clock face. Bangladesh-based “taifur” built a wristwatch that displays the time in binary numbers.

There were two different gorgeous minimalist clocks built mostly of wood. One entrant even displayed the time of day with an animated Rubik’s cube.

But one of the most unusual entries was the Periodic Table clock. As the day rolls along, its display continually lights up three elements on a periodic table, providing the hours, minutes, and seconds through the atomic numbers of those elements (in the upper left of each element’s square in the table). As the seconds change, the red-highlighted “seconds” square cycles gracefully through the first 60 elements of the table. “If you are familiar with the periodic table or the atomic numbers of the elements, you can tell the time from far away just by seeing the elements!” says the clock’s creator in a how-to-build-it guide on Instructables.
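
The build’s firmware runs on an Arduino, but the underlying mapping is simple enough to sketch in a few lines of Python. To be clear, this is an illustration of the idea only, not Bozkurt’s actual code, and the helper names are invented for the example:

    from datetime import datetime

    # First 60 element symbols; atomic number = list index + 1.
    ELEMENTS = [
        "H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne",
        "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar", "K", "Ca",
        "Sc", "Ti", "V", "Cr", "Mn", "Fe", "Co", "Ni", "Cu", "Zn",
        "Ga", "Ge", "As", "Se", "Br", "Kr", "Rb", "Sr", "Y", "Zr",
        "Nb", "Mo", "Tc", "Ru", "Rh", "Pd", "Ag", "Cd", "In", "Sn",
        "Sb", "Te", "I", "Xe", "Cs", "Ba", "La", "Ce", "Pr", "Nd",
    ]

    def elements_for(now: datetime) -> dict:
        """Light up the element whose atomic number matches each time component."""
        def pick(n):
            return ELEMENTS[n - 1] if n > 0 else "unlit"
        return {
            "hours": pick(now.hour),      # e.g. 13 -> Al (aluminum)
            "minutes": pick(now.minute),  # e.g. 26 -> Fe (iron)
            "seconds": pick(now.second),  # e.g. 47 -> Ag (silver)
        }

    print(elements_for(datetime(2023, 9, 24, 13, 26, 47)))
    # {'hours': 'Al', 'minutes': 'Fe', 'seconds': 'Ag'}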

It was the brainchild of Estonia-based interaction designer Görkem Bozkurt (who had a local advertising agency print a precisely-sized periodic table onto plexiglass, ready to be lit up). But the instructions add that the same table could just as easily have been printed on transparent sticker paper, and then attached to the plexiglass by hand.

“The construction is fairly straightforward as long as you have a 3D printer to create the case,” noted I-Programmer. (Numerous Arduinos now support real-time clock libraries, which can be connected to the clock’s programmable strip of LED lights — though there is some soldering involved.) And the comments on Bozkurt’s page at Instructables show at least one person who’s already used the page to build their own Periodic Table clock.

But it’s not Bozkurt’s only clock. There’s also a 3D-printed “rolling ball clock,” which indicates the time with the position of steel marbles. The bottom tray of marbles indicates hours, the one above it counts minutes in multiples of five, and the highest tray counts the remaining single minutes. (After 12:59, all the marbles roll to the bottom except the single marble representing 1:00.)
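
Mechanically, the tipping trays do all the counting, but the readout logic is easy to express in code. Here is a rough sketch of the arithmetic the trays perform (not the project’s actual files, just the idea):

    def tray_counts(hour_12: int, minute: int) -> dict:
        """Marbles per tray for a 12-hour rolling-ball clock, read bottom-up."""
        return {
            "hour_tray": hour_12,             # bottom tray: 1-12 marbles
            "five_minute_tray": minute // 5,  # middle tray: 0-11 marbles
            "minute_tray": minute % 5,        # top tray: 0-4 remaining minutes
        }

    print(tray_counts(12, 59))  # {'hour_tray': 12, 'five_minute_tray': 11, 'minute_tray': 4}
    print(tray_counts(1, 0))    # {'hour_tray': 1, 'five_minute_tray': 0, 'minute_tray': 0} -- the 12:59 rollover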

And in 2021 Bozkurt even built an Arduino-powered astronomical clock that displays the phases of the moon.

Nixie Tubes vs. The Matrix

The contest seemed to inspire tributes to technology — both old and new. Dallas-based “Sawdust Willy” built a cuckoo clock from scratch, while Washington state’s Matt Wach delivered the time at the bottom of a rectangle of “falling particles” reminiscent of The Matrix.

And a UK-based maker with the nickname “4dcircuitry” actually built a simulation of a “cold cathode display” — also known as a Nixie tube — by soldering tiny LEDs onto brass wires shaped like numbers. (“This was a lot of work,” they explain in a video about the project…)

Their video contains some classic self-deprecating maker humor. “With all that out of the way, I had to make it function like a clock with some fancy code,” they explain at one point. “So I did what all coders do. I went online and copied the code from a stranger on the internet. I then made a few changes so that I can claim it as mine.” There’s no neon or argon involved — just glass tubes surrounding the numbers — but the effort was enough to win the contest’s grand prize: a $500 Amazon gift card.

In total, 26 prizes were handed out — including 15 “runner-up” winners who each received a $50 gift card. But it’s just another typical day in the life of Instructables. Since the site launched in 2005, “We’ve run over 1,000 contests,” according to their official tally, “with more than 19,000 winners to date.”

And judging by the clock-building contest, there are lots and lots of makers who have ended up having a lot of fun.


Open Source Can Deflate the ‘Threat’ of AI  https://thenewstack.io/open-source-can-deflate-the-threat-of-ai/ Fri, 22 Sep 2023 13:44:26 +0000 https://thenewstack.io/?p=22718930

BILBAO, SPAIN — AI should not be restricted, controlled, and locked down; instead, developers working with the generative language models underpinning this revolution should rely on open source to ultimately allow for positive outcomes that we can only dream about today.

Of course, this assumption has many naysayers, ranging from politicians with differing agendas to frightened members of the public and other parties, some with good intentions and some with bad.

As Jim Zemlin, the Linux Foundation’s executive director, noted in his Open Source Summit Europe keynote, Elon Musk was one of over a thousand signatories who expressed fear of the revolution getting out of control when, in an open letter earlier this year, Musk and others proposed a six-month moratorium on AI development beyond what OpenAI had already released with ChatGPT.

This is not to downplay that AI models are already often biased and fail to take diversity into account, representing very real risks and potentially tragic outcomes for today and tomorrow. But ill-founded reactions to fears of what could go wrong are numerous.

Zemlin offered a number of substantive reasons, and historical examples involving cryptography, for why attempting to lock down LLMs could prove a costly mistake.

“Recently, we’ve heard from different people around the world, largely folks that already have a lot of capital, a lot of GPUs, and good foundation models that we need to take a six-month pause until we’ve figured it out. We’re even hearing calls from folks who are saying, hey, this large language models technology and advanced AI technology is so powerful that in 20 years in the hands of individual actors, people could do terrible things, such as create violent weapons, massive cyberattacks and so forth,” Zemlin said.

“And what I’m telling you today is that kind of fear and that kind of concern that the availability of open source large language models would create some terrible outcome simply isn’t true. That open source always creates sunshine, and that fear as a counterbalance around the code, because it’s not just bad things people do with large language models it is good things too, like discovering advanced drugs, helping manufacturing to become more efficient, using large language models to create more environmentally friendly building construction. Like for every action, there can be a reaction, and we’re already seeing open source immediately start to tackle some of these things people are concerned about when it comes to AI.”

Tech Works: When Should Engineers Use Generative AI? https://thenewstack.io/tech-works-when-should-engineers-use-generative-ai/ Fri, 22 Sep 2023 12:00:26 +0000 https://thenewstack.io/?p=22718766

Your developers are already playing around with generative AI. You can’t stop them completely and you probably don’t want to, lest they fall behind the curve. After all, you want your developers focusing on meaningful work, and Large Language Model (LLM)-trained code-completion tools like Amazon Web Services’ CodeWhisperer and GitHub’s Copilot have great potential to increase developer productivity.

But, if you don’t have a generative AI policy in place, you’re putting your organization at risk, potentially harming your code quality and reliability.

ChatGPT’s code is inaccurate more often than not, according to an August study by Purdue University researchers. Yet more than 80% of Fortune 500 companies have accounts on it. You could also be putting your reputation on the line. Just look at Samsung, where an engineer recently leaked sensitive internal source code into ChatGPT by accident, sparking a blanket ban on generative AI assistants. That’s probably a reasonable short-term response, but it lacks long-term vision.

In order to take advantage of this productivity potential, without the PR pitfalls, you have to have a clearly communicated generative AI policy for engineering teams at your organization.

For this edition of Tech Works, I talked to engineering leaders who adopted GenAI early, to help you decide how and when to encourage your software engineers to use generative AI, and when to deter them from leveraging chatbots and risking your organization’s privacy and security.

Consumer vs. Enterprise Generative AI Tools

There are many generative AI tools out there — CodeWhisperer, Google’s Bard, Meta AI’s LLaMA, Copilot, and OpenAI’s ChatGPT. But thus far, it’s the latter two that have gotten the buzz within engineering teams. Deciding which GenAI tool to use comes down to how you’re using it.

“People are just dropping stuff in ChatGPT and hoping to get the right answer. It’s a research tool for OpenAI you’re using for free. You’re just giving them free research,” Zac Rosenbauer, CTO and co-founder of a developer documentation platform company Joggr, told The New Stack. (By default, ChatGPT saves your chat history and uses the conversations to further train its models.)

Rosenbauer then showed me a series of slides to explain how an LLM works, which comes off more like guessing which word most probably fills in a Mad Libs blank than searching for the most accurate response. “That’s why you get really stupid answers,” he said. “Because it’s going to try to just answer the question no matter what.”
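
To make that point concrete, here is a deliberately tiny sketch of next-word prediction in Python. The vocabulary and probabilities are invented for illustration; a real LLM makes the same kind of weighted guess over tens of thousands of tokens, with no notion of whether the likeliest word happens to be true:

    import random

    # Toy "model" for completing the sentence "The capital of Australia is ___".
    # The numbers are made up, but the mechanism is the point: the most
    # probable-sounding word wins, not the verified fact.
    next_word_probs = {
        "Sydney": 0.45,     # plausible-sounding but wrong
        "Canberra": 0.40,   # correct
        "Melbourne": 0.15,  # also wrong
    }

    def sample_next_word(probs: dict) -> str:
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights, k=1)[0]

    print(sample_next_word(next_word_probs))  # often "Sydney": fluent, confident and wrong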

Public LLMs are trained to give an answer, even if they don’t know the right one, as shown by the Purdue study that found 52% of code written by ChatGPT is simply wrong, even while it looks convincing. You need to explicitly tell a chatbot to only tell you if it knows the right answer.

Add to this, the very valid concern that employees from any department are copy-pasting personally identifiable information or private company information into a public tool like ChatGPT, which is effectively training it on your private data.

It’s probably too soon for any teams to have gained a competitive edge from the brand-new ChatGPT Enterprise, but it does seem that, due to both quality and privacy concerns, you want to steer your engineers away from regular ChatGPT for a lot of their work.

“The first thing we say to any company we deal with is to make sure you’re using the right GenAI,” said James Gornall, cloud architect lead at CTS, which is focused on enabling Google customer business cases for data analytics, including for Vertex AI, the generative AI offering within an enterprise’s Google Cloud perimeter. “There’s enterprise tooling and there’s consumer tooling.”

“Every company now has GenAI usage and people are probably using things that you don’t know they’re using.”

— James Gornall, CTS

ChatGPT may be the most popular, but it’s also very consumer-focused. Always remind your team: just because a tool is free, doesn’t mean there isn’t a cost for using it. That means never putting private information into a consumer-facing tool.

“No business should be doing anything in Bard or ChatGPT as a strategy,” Gornall told The New Stack. Free, consumer-facing tools are usually harmless at the individual level, but, “the second you start to ask it anything around your business, strategy approach or content creation” — including code — “then you want to get that in something that’s a lot more ring-fenced and a lot more secure.”

More often than not, generative AI benefits come from domain specificity. You want an internal developer chatbot to train on your internal strategies and processes, not the whole world.

“Every company is now kind of a GenAI company. Whether you like it or not, people are probably going to start typing in the questions to these tools because they’re so easy to get a hold of,” Gornall said.

“You don’t even need a corporate account or anything. You can register for ChatGPT and start copying and pasting stuff in, saying ‘Review this contract for me’ or, in Samsung’s case, ‘Review this code,’ and, invariably, that could go very badly, very, very quickly.”

You not only increase privacy and security by staying within your organizational perimeters, you increase your speed to value.

GenAI “can save a lot of time; for example, generating documents or generating comments — things that developers generally hate doing. But other times, we will try using this and it’ll actually take us twice as long because now we’re having to double-check everything that it wrote.”

— Ivan Lee, Datasaur

Don’t use a consumer-facing GenAI tool for anything that is very proprietary, or central to how your business operates, advised Karol Danutama, vice president of engineering at Datasaur. But, if there is something that is much more standardized where you could imagine 100 other companies would need a function just like this, then he has advised his team to feel more comfortable using LLMs to suggest code.

Don’t forget to factor in ethical choices. A company-level AI strategy must cover explainability, repeatability and transparency, Gornall said. And it needs to do so in a way that’s understood by all stakeholders, even your customers.

Context Is Key to Developer Flow State

You will always gain more accuracy and speed to value if you are training an existing LLM within the context of your business, on things like internal strategies and documentation. A context-driven chatbot — like the enterprise-focused Kubiya — needs to speak to the human content creator, and hopefully speed up or erase the more mundane parts of developers’ work. Early engineering use cases for generative AI include the following (a brief sketch of the documentation case follows the list):

  • Creating code snippets.
  • Generating documentation and code samples.
  • Creating functions.
  • Importing libraries.
  • Creating classes.
  • Generating a wireframe.
  • Running quality and security scans.
  • Summarizing code.
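
As a concrete example of the documentation use case above, here is a minimal sketch of how a team might wrap an internal model endpoint to draft docstrings. The moving_average function is just a stand-in, and call_llm() is a hypothetical placeholder for whatever ring-fenced, enterprise-approved endpoint your organization actually exposes, not a real library call:

    import inspect

    def moving_average(values, window):
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    def build_doc_prompt(func) -> str:
        """Assemble a documentation-generation prompt from a function's source."""
        source = inspect.getsource(func)
        return (
            "Write a concise docstring and a one-paragraph usage note for the "
            "following Python function. If anything is ambiguous, say so "
            "rather than guessing.\n\n" + source
        )

    # call_llm() is hypothetical; substitute your organization's internal endpoint.
    # draft_docs = call_llm(build_doc_prompt(moving_average))
    print(build_doc_prompt(moving_average))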

Generative AI has the potential to “really get rid of a lot of the overhead of the 200 characters you have to type before you start on a line of code that means anything to you,” Gornall said. You still have to manually review it for relevance and accuracy within your context. “But you can build something real by taking some guidance from it and getting some ideas of talking points.”

For coding, he said, these ideas may or may not be production-ready, but generative AI helps you talk out how you might solve a problem. So long as you’re using an internal version of GenAI, you can feed in your coding standards, coding styles, policy documents and guideline templates into the chatbot. It will add that content to its own continuous improvement from external training, but keep your prompts and responses locked up.

“You can scan your entire codebase in a ridiculously quick amount of time to say, ‘Find me anything that doesn’t conform to this,’ or ‘Find me anything that’s using this kind of thing that we want to deprecate,’” Gornall said.

Don’t close off your dataset, he advised. You need to continue to train on third-party data too, lest you create an “echo chamber within your model where, because you’re just feeding it your wrong answers, it is going to give you wrong answers.” With the right balance of the two, you can maintain control and mitigate risk.

Generative AI for Documentation

One of the most in-demand productivity enablers is documentation. Internal documentation is key to self-service, but is usually out of date — if it even exists at all — and difficult to find or search.

Add to that, documentation is typically decoupled from the software development workflow, triggering even more context switching and interrupted flow state to go to Notion, Confluence or an external wiki to look something up.

“If you know about developers, if it’s not in their [integrated development environment], if it’s not in their workflow, they will ignore it,” Rosenbauer said.

This makes docs ripe for internal generative AI.

“We felt that developer productivity recently had suffered because of how much [developers were] asked to do,” Rosenbauer said. “The cognitive load of the developer is so much higher than it was, in my opinion, 10 or 15 years ago, even with a lot more tooling available.”

“Generative AI is not helping the core current role of an engineer, but it’s getting rid of a lot of the noise. It’s getting rid of a lot of the stuff that can take time but not deliver value.”

— James Gornall, CTS

Zac Rosenbauer reflected on why he and Seth Rosenbauer, his brother and Joggr co-founder, quit their jobs as engineering team leads just over a year ago.

For example, Zac Rosenbauer noted, “DevOps, though well-intended, was very painful for a lot of non-DevOps software engineers. Because the ‘shift left’ methodology is important — I think of it as an empowering thing — but it also forces people to do work they weren’t doing before.”

So the Rosenbauers spent the following six months exploring what had triggered that dive in developer productivity and increase in cognitive load. What they realized is that the inadequacy or non-existence of internal documentation is a huge culprit.

As a result, they created Joggr, a generative AI tool — one that “regenerates content,” Zac Rosenbauer said. One of the company’s main focuses is automatically regenerating code snippets to maintain documentation, descriptions, portions of text, links to code and more. About a third of Joggr’s customers currently are working in platform engineering and they expect that practice to grow.

Will GenAI Take Jobs Away?

“The question we get asked quite a lot is: Is it taking our jobs? I don’t think so. I think it’s changing people’s jobs and people will do well to learn how to work with these things and get the most out of them, but I think it is still very early days,” Gornall said.

“Generative AI is not helping the core current role of an engineer, but it’s getting rid of a lot of the noise. It’s getting rid of a lot of the stuff that can take time but not deliver value.”

It is unlikely that the rate of development and adoption of generative AI will slow down, so your organization needed a GenAI policy yesterday. And it must include a plan to train engineers about it.

Just as his own search-engine-native generation learned with the help of Google and StackOverflow, Ivan Lee, CEO and founder of Datasaur, believes that the next generation of computer science grads will be asking ChatGPT or Copilot. Everyone on a team will have to level up their GenAI knowledge. Don’t forget, identifying flaws in other people’s code is a key part of any engineering job — now you just have to apply that skill to machine-written code, too.

Lee added, “We need to be very careful about knowing how to spot check, understanding the strengths of this technology and the weaknesses.”

Graydon Hoare Remembers the Early Days of Rust https://thenewstack.io/graydon-hoare-remembers-the-early-days-of-rust/ Sun, 10 Sep 2023 13:00:05 +0000 https://thenewstack.io/?p=22717542

In late May, Rust’s creator, Graydon Hoare, took a look back at the early days of Rust on his personal blog. Hoare started by reminding readers that “I haven’t been involved in the project for a decade,” so “I think anything I say needs to be taken with a big grain of salt, but I do keep up somewhat with the comings and goings…”

Rust started as Hoare’s personal project in 2006, later attracting many more contributors and Mozilla’s official sponsorship in 2009, according to a recent history published by MIT Technology Review.

In two back-to-back blog posts, Hoare offered at least a few quick glimpses into how a programming language evolves. Interestingly, the second post was titled “The Rust I Wanted Had No Future,” and toward the end Hoare writes succinctly that “Divergence in preferences are real! My preferences are weird. You probably wouldn’t have liked them.”

It all provides a fascinating look at some quirks that got ironed out along the way — and how the early days of a language can differ from its present-day incarnation…

Back to the Future

Hoare’s personal blog covers a wide variety of topics. In 2023 he’s written four posts, the first about amateur ham radio and the second about corporate-employed maintainers with bad incentives for open source contributions (when instead those employers should just “let maintainers be maintainers”).

But Hoare shared some quick thoughts about Rust in the next two posts. The first post shared a recent question to Hoare: “Do you ever wish you had made yourself Benevolent Dictator For Life of the Rust project?” And would there be less drama on the project if he had?

“No,” Hoare answered both questions. Hoare added further down that “I don’t like attention or stress, I was operating near my limits while I was project tech lead back in 2009-2013… Additionally, I’ve no reason to believe I would have set up strong or healthy formal mechanisms for decision making, conflict management or delegation and scaling.”

But then that post turned up in Reddit’s Rust subreddit, where Hoare is a sometime contributor. To a user asking if Rust development was slow, Hoare once responded “In terms of major features, it’s good for it to slow down.”

In the discussion on his own post, Graydon had commented, “Just don’t get me started on angle brackets for type parameters and the single apostrophe for lifetimes!”

One Reddit user insisted on following up. Hoare clarified that “they were just syntax arguments I was on the losing side of.” And he even supplied a link to his “Rust prehistory” GitHub repository, showing 13-year-old Rust code where square brackets had actually been implemented for type parameters, adding that “I personally think square brackets are the obvious choice for type parameters.”

Hoare was also opposed to explicit lifetimes for references, and “was talked into them as ‘something that will almost always be inferred so it doesn’t matter what the syntax is, nobody will ever write them’. Obviously that .. didn’t quite happen.” Almost as an afterthought, Hoare wrote in his Reddit comment, “I should probably do a blog post someday ‘the Rust I wanted’ and clarify both that (a) the Rust we got is fairly far from the one I wanted on quite a few axes and (b) making that observation in no way detracts from my feelings of overwhelming gratification at the success of the language!”

Surpassing C++

Days later Hoare did write that blog post, emphasizing that the Rust he’d wanted “would probably have been extremely unsatisfying to everyone involved, and it never would have gone anywhere…

“[D]on’t get me wrong: [I] like the result. It’s great. I’m thrilled to have a viable C++ alternative, especially one people are starting to consider a norm, a reasonable choice for day-to-day use. I use it and am very happy to use it in preference to C++. But…!”

In the post, Hoare includes a list of “just a few of the places I literally didn’t want, and/or currently don’t like, what Rust wound up with.” For example, in his “Complex Grammar” section, Hoare complains Rust is still hard to parse. “It’s easier to work with than C++, but that’s fairly faint praise. I lost almost every argument about this, from the angle brackets for type parameters to the pattern-binding ambiguity to the semicolon and brace rules to … ugh I don’t even want to get into it. The grammar is not what I wanted. Sorry.”

Another example: the way Rust handles types. Hoare prefers “structural” typing (where objects have compatible types if their structure is the same — regardless of whether they’ve been declared with the same type name). Hoare also reveals that “the language initially had (and I hoped it would have again) compiler-emitted ‘type descriptors’ that the user could invoke a reflection operator on.”
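
For readers unfamiliar with the distinction, here is a rough illustration of structural typing using Python’s typing.Protocol, which provides structural subtyping. It shows the concept only; it is not Rust syntax and not a reconstruction of Hoare’s design:

    from typing import Protocol

    class HasArea(Protocol):
        def area(self) -> float: ...

    class Square:
        def __init__(self, side: float):
            self.side = side

        def area(self) -> float:
            return self.side * self.side

    def describe(shape: HasArea) -> str:
        return f"area = {shape.area()}"

    # Square never declares that it implements HasArea; a structural type
    # checker accepts it purely because the shape of its methods matches.
    # A nominal system would demand an explicit "class Square(HasArea)".
    print(describe(Square(3.0)))  # area = 9.0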

Hoare also had some thoughts on how Rust handles decimal floating point numbers. “[B]asically every language discovers the long way that financial math is special and, at great length, eventually adds a decimal type. I wanted Rust to do this upfront, but it was perpetually deferred to libraries. There are a few, but it’d be better to have one built in…”
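
The problem Hoare is pointing at is easy to demonstrate. Here it is in Python, whose standard library happens to ship the kind of decimal type he wanted Rust to have built in:

    from decimal import Decimal

    # Binary floating point cannot represent most decimal fractions exactly,
    # which is why "financial math is special."
    print(0.10 + 0.20)                        # 0.30000000000000004
    print(0.10 + 0.20 == 0.30)                # False

    # A decimal type keeps the cents exact.
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True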

There were more examples, but Hoare wasn’t trying to enumerate specific differences between his vision and The Rust We Got. Instead, “The point is to indicate thematic divergence.

“The priorities I had while working on the language are broadly not the revealed priorities of the community that’s developed around the language in the years since,” Hoare writes, “or even that were being revealed in the years during.

“I would have traded performance and expressivity away for simplicity — both end-user cognitive load and implementation simplicity in the compiler — and by doing so I would have taken the language in a direction broadly opposed to where a lot of people wanted it to go.”

Hoare even provided specifics:

  • “A lot of people in the Rust community think ‘zero cost abstraction’ is a core promise of the language. I would never have pitched this and still, personally, don’t think it’s good. It’s a C++ idea and one that I think unnecessarily constrains the design space… I would have traded lots and lots of small constant performance costs for simpler or more robust versions of many abstractions. The resulting language would have been slower.”
  • “Similarly I would have traded away so much expressivity that it would probably make most modern Rust programmers start speaking about the language the way its current critics do: it’d feel clunky and bureaucratic, a nanny-state language that doesn’t let users write lots of features in library code at all, doesn’t even trust programmers with simple constructs like variable shadowing, environment capture or inline functions.”

Perhaps in a fittingly ironic coda, when the blog post turned up in Reddit’s programming languages subreddit, it attracted a mix of responses. One user even commented, “I really wish the Rust he wanted existed, it sounds beautiful.”

But another commenter seemed more rooted in the reality of today. “His Rust wouldn’t have been better, it would have been different

“I really like today’s Rust… I love performance/pedal-to-the-floor code, and today’s Rust offers me that in a nice package.”

Entrepreneurship for Engineers: a Post-Layoff Startup? https://thenewstack.io/entrepreneurship-for-engineers-a-post-layoff-startup/ Fri, 08 Sep 2023 12:00:44 +0000 https://thenewstack.io/?p=22717167

Let’s face it: the past year has been a rough time for a lot of people who work in tech. For some entrepreneurs, a layoff is exactly the kick in the pants they need to start their own company.

What are some pieces of advice for recently laid off folks who think entrepreneurship might be the next step?

Check Your Bank Balance.

If you’re considering building a product company — either one that ends up being bootstrapped or one that gets outside venture funding — you’ll likely need some savings or a generous severance package. If you don’t have that, chances are you’re better off starting out by selling services of some kind first.

You can start selling services immediately and get cash coming in the door quickly; it’s fairly unlikely you’ll be able to do that with a product company.

Even if your ultimate goal is to build a product company, you can start with a consulting or freelance business and work on the product on the side. But in fact, the need for consistent cash flow is one reason that the period immediately after a layoff, when you might have a severance package and could at the very least collect unemployment, can be a good time to start a company.

Mark Feldman, CEO and co-founder of RevenueBase, a corporate database company, doesn’t think he’d have had the nerve to start his own company if it hadn’t been for a layoff and severance package that gave him the luxury of building something he thought was useful without a huge amount of pressure.

Take Your Time.

On the other hand, the psychological aftermath of a layoff is simply not the same as quitting your job because you’re ready to take the leap to entrepreneurship.

“I took some time to think about what I wanted to do for the next step,” said Sesheeka Selvaratnam, founder of AI Query. Psychologically, he cautioned, you’ll need to do a bit of a reset before diving into something new, to avoid making rash decisions — and just to figure out what you would be interested in building.

Start with an MVP.

Both Feldman and Selvaratnam launched products that were very much “minimum viable products,” or MVPs.

Don’t wait until your product is perfected to introduce it to potential customers, Selvaratnam urged.

“Get it out there,” he said. “My first product was more duct tape and string. It was really crappy. And when I saw folks paying for it, that’s when I went into 2.0.”

Even if you have a severance package or savings to fall back on, if your entrepreneurial journey has been forced on you, you probably are short on time; you want to be able to prove as quickly as possible that you have something that people will pay for.

This is just as valid if you’re planning on getting venture funding. Investors are more likely to bite if they see real traction, and some revenue, from a minimum viable product.

When Feldman showed an MVP of the bespoke business databases he planned to create through his startup, investors were impressed by the demand for a product that was incredibly basic and saw huge potential for the idea if it was actually packaged as software.

“I was getting feedback from investors, like, ‘you don’t even have a software product, you’re just selling Microsoft Excel files for $50,000,’” he said. But the spreadsheets were enough to prove there was demand in the market for the type of data RevenueBase was selling.

Stick with It, but Be Prepared to Pivot.

A big part of entrepreneurship is mindset, and if your entrepreneurial journey started by accident, because of a layoff, rather than by design, keeping yourself in the right mindset can be challenging.

“Sometimes you’re beating yourself up and saying, ‘What are we doing here?’” Selvaratnam said. “You’re just trying to stay with it.”

One of Feldman’s biggest regrets was not believing in himself, particularly at the beginning. “It’s easy to talk yourself out of starting the business,” he said.

Even though he’d read about founders raising money and building big companies all the time, he never thought that he was the kind of person who’d be able to do that.

At the same time, sometimes an idea just isn’t good. “You have to realize when to pack it up as well,” he said. “Don’t keep putting massive amounts of energy over and over again. If you’re not seeing something, it’s the universe telling you you have to pivot.”

Move into a Leadership Mindset.

At a certain point, the backstory behind the company becomes irrelevant — no one cares that this company wouldn’t have happened if not for a layoff once you’ve raised a funding round and started hiring people.

The only difference is subtle: the type of person who’s pushed into entrepreneurship by a layoff may be slightly different in mindset from someone who laid out a plan to start a company and then left their job when the time was right.

According to Feldman, one of the challenges with building a company has been not just acquiring the skills needed to be a CEO, but rather figuring out what those skills are.

“Over time, as the company evolves, you have to learn to see around corners. You have to learn how to navigate a board of directors and investors,” he said. “And you have to really learn to know yourself and your values and how you want to lead the company.”

Open Source Needs Maintainers. But How Can They Get Paid? https://thenewstack.io/open-source-needs-maintainers-but-how-can-they-get-paid/ Wed, 06 Sep 2023 10:00:59 +0000 https://thenewstack.io/?p=22717420

Jordan Harband is the sort of person the tech industry depends on: a maintainer of open source software projects.

Lots of them — by his count, about 400.

Harband, who has worked at Airbnb and Twitter, among other companies, was laid off from Coinbase more than a year ago. The Bay Area resident is now a contractor for the OpenJS Foundation, as a security engineering champion.

He also gets paid for some of his freelance open source maintenance work, by Tidelift and other sponsors, labor that he estimates takes up 10 to 20 hours a week.

His work is essential to the daily productivity of developers around the globe. In aggregate, some projects he maintains, he told The New Stack, are responsible for between 5% and 10% of npm’s download traffic.

But spending all of his time on his open source projects, he said, would not be possible “without disrupting my life and my family and our benefits and lifestyle.”

Case in point: his COBRA health insurance benefits from Coinbase run out at the end of the year. “If I don’t find a full-time job, I have to find my own health insurance,” he said. “That’s just not a stressor that should be in anyone’s life, of course, but certainly not in the life of anyone who’s providing economic value to so many companies and economies.”

Harband is the sole maintainer of many of the projects he works on. He’s not the only developer in that situation. And that reliance on an army of largely unpaid hobbyists, he said, is dangerous and unsustainable.

“We live in capitalism, and the only way to ensure anything gets done is capital or regulation — the carrot or the stick,” he said. “The challenge is that companies are relying on work that is not incentivized by capital or forced by regulation. Nobody’s held to task, other than by market forces, if they ship poor or insecure software.”

And, Harband added, “There is a lack of enforcement of fiduciary duty on companies that use open source software — which is basically all of them — because it’s their fiduciary duty to invest in their infrastructure. Open source software is everyone’s infrastructure, and it is wildly under-invested in.”

The ‘Bus Factor’ and the ‘Boss Factor’

The world’s reliance on open source software — and the people who maintain it — is no secret. For instance, Synopsys’ 2023 open source security report, which audited more than 1,700 codebases across 17 industries, found that:

  • 96% of the codebases included open source software.
  • Just over three-quarters of the code in the codebases — 76% — was open source.
  • 91% of the codebases included open source software that had had no developer activity in the past two years — a timeframe that could indicate, the report suggested, that an open source project is not being maintained at all.

This decade, there have been a number of attempts to set standards for open source security: executive orders by the Biden administration, new regulations from the European Union, the formation of the Open Source Security Foundation (OpenSSF), and the release of its security scorecard.

In February 2022, the U.S. National Institute of Standards and Technology (NIST) released its updated Secure Software Development Framework, which provides security guidelines for developers.

But the data show that not only are open source maintainers usually unaware of current security tools and standards, like software bills of materials (SBOMs) and supply-chain levels for software artifacts (SLSA), but they are largely unpaid and, to a frightening degree, on their own.

A study released in May by Tidelift found that 60% of open source maintainers would describe themselves as “unpaid hobbyists.” And 44% of all maintainers said they are the only person maintaining a project.

“Even more concerning than the sole maintainer projects are the zero maintainer projects, of which there are a considerable amount as well that are widely used,” Donald Fischer, CEO and co-founder of Tidelift, told The New Stack. “So many organizations are just unaware because they don’t even have telemetry, they have no data or visibility into that.”

In Tidelift’s survey, 36% of maintainers said they have considered quitting their project; 22% said they already had.

It brings to mind the morbid “bus factor” — what happens to a project if a sole maintainer gets hit by a bus? (Sometimes this is called the “truck factor.” But the hypothetical tragic outcome is the same.)

An even bigger threat to continuity in open source project maintenance is the “boss factor,” according to Fischer.

The boss factor, he said, emerges when “somebody gets a new job, and so they don’t have as much time to devote to their open source projects anymore, and they kind of let them fall by the wayside.”

Succession is a thorny issue in the open source community. In a report issued by Linux Foundation Research in July, in which the researchers interviewed 32 maintainers of some of the top 200 critical open source projects, only 35% said their project has a strong new contributor pipeline.

Valeri Karpov has been receiving support from Tidelift for his work as chief maintainer of Mongoose, an object modeler for MongoDB, for the past five years. The Miami resident spends roughly 60 hours a month on the project, he told The New Stack.

He inherited the chief maintainer role in 2014 when he worked at MongoDB as a software engineer. The project’s previous maintainer had decided not to continue with it. Today, a junior developer who also works for Karpov’s application development company contributes to Mongoose, along with three volunteers.

For a primary maintainer who does not have the support he has, he said, there are other challenges in addition to the matter of doing work for free. For starters, there’s finding time to keep up with changes in a project’s ecosystem.

Take Mongoose, for example. The tool helps build Node.js applications with MongoDB. “JavaScript has changed a lot since I started working on Mongoose, Node.js as well,” Karpov said. “When I first started working on Mongoose, [JavaScript] Promises weren’t even a core part of the language. TypeScript existed, but still wasn’t a big deal. All sorts of things have changed.”

And if your project becomes popular? You’ll be spending an increasing amount of time offering user support and responding to pull requests, Karpov said: “We get like dozens of inbound GitHub issues per day. Keeping up on that took some getting used to.”

How Maintainers Can Get Paid

It would seem to be in the best interest of the global economy to pay the sprawling army of hobbyists who build and maintain open source code — compensating them for the time and headaches involved in maintaining their code, recruiting new contributors and making succession plans, and boning up on the latest language and security developments.

But the funding landscape remains patchy. Among the key avenues for financial support:

Open source program offices (OSPOs). No one knows exactly how many organizations maintain some sort of OSPO or other in-house support for their developers and engineers who contribute to open source software.

However, data from Linux Foundation Research studies shows increasing rates of OSPO adoption among public sector and educational institutions, according to Hilary Carter, senior vice president of research and communications at the foundation.

About 30% of Fortune 100 companies maintain OSPOs, according to GitHub’s 2022 Octoverse report on the state of open source software. Frequently, an enterprise will support work only on open source software that is directly related to the employer’s core business.

Why don’t more corporations support open source work? “Many organizations, especially those outside the tech sector, often do not fully understand the advantages of having an OSPO, or the strategic value of open source usage, or the benefits that come from open source contributions,” said Carter, in an email response to The New Stack’s questions.

“Their focus may be short-term in nature, or there may be concerns about intellectual property and licensing issues. Depending on the industry developers work in, highly regulated industries like financial services often have policies that prohibit any kind of open source contribution, even to projects their organizations actively use. Education and outreach are key to changing these perceptions.”

Stormy Peters, vice president of communities at GitHub, echoed the notion that many companies remain in the dark about the benefits of OSPOs.

“An OSPO can help software developers, procurement officers and legal teams understand how to select an open source license, or how non-technology staff can engage local communities in the design and development of a tool,” Peters wrote, in an email response to The New Stack’s questions.

“OSPOs create a culture shift toward more open, transparent and accountable methods of building tech tools to ensure sustainability.”

Foundations. Sometimes foundations created to house an open source project will provide financial support to the maintainers of that project. The Rust Foundation, for example, offers grants to maintainers of that popular programming language.

However, such an approach has its limits, noted Harband. “One of the huge benefits of foundations for projects is that they give you that sort of succession path,” he said. “But private foundations can’t support every project.”

In 2019, Linux Foundation introduced CommunityBridge, a project aimed at helping open source maintainers find funding. The foundation pledged to match organizational contributors up to a cumulative total of $500,000; GitHub, an inaugural supporter, donated $100,000.

But CommunityBridge has evolved into LFX Crowdfunding, part of the foundation’s collaboration portal for open source projects. “Projects receive 100% of donations and manage their own funds, which can support mentorship programs, events or other sustainability requirements,” wrote Carter in her email to TNS.

Carter also pointed to OpenSSF’s Alpha-Omega Project. Launched in February 2022, the project supports maintainers who find and fix security vulnerabilities in critical open source projects. In June, for instance, the project announced that it had funded a new security developer in residence for one year at the Python Software Foundation.

Alpha-Omega, Carter wrote, “creates a pathway for critical open source projects to receive financial support and improve the security of software supply chains.” She urged organizations that have a plan for how funds can be used or can offer funding to get in touch with OpenSSF, which is a Linux Foundation project.

Monetization platforms. Tidelift is among the platforms listed at oss.fund, a crowd-sourced and -curated catalog of sources through which open source maintainers can acquire financial support.

Fischer’s organization pays people “to do these important but sometimes tedious tasks” that open source projects need, he said. “We’ve had success attracting new maintainers to either projects where the primary maintainer doesn’t want to do those things, or in some rare cases is prohibited from doing it because of their employment agreement with somebody else.”

The rates for such work vary, depending on variables including the size of the open source project and how widely it is used. “Our most highly compensated maintainers on the platform are now making north of six figures, U.S. income, off of their Tidelift work,” Fischer said. “Which is great, because that means, basically, independent open source maintainership is now a profession.”

Among the most high-profile monetization platforms is GitHub Sponsors, which was launched in beta in 2019 and became generally available for organizations to sponsor open source workers this past April. As of April, the most recent data available, GitHub reported that Sponsors had raised more than $33 million for maintainers.

In 2022, GitHub reported, nearly 40% of sponsorship funding through the program came from organizations, including Amazon Web Services, American Express, Mercedes Benz and Shopify. In 2023, it added a tool to help sponsors fund several open source projects at once.

The introduction of the bulk-support function and other upgrades has helped GitHub Sponsors see the number of organizations funding open source projects double over the past year, according to Peters, of GitHub. More than 3,500 organizations support maintainers through GitHub Sponsors, she wrote in an email to TNS.

“For far too long, developers have had to choose between their careers and open source passions — what they’re paid to do [versus] what they actually love,” Peters wrote. “Open source developers deserve to accelerate their careers at the rate they’re accelerating the world.”

LFX Crowdfunding is integrated with GitHub Sponsors, Carter told TNS in an email. She offered some guidance to help users get connected: “Community members can add and configure your sponsor button by editing a Funding.yml file in your repository’s .github folder, on the default branch.”

“Any mechanism that makes it easy for projects to find the support they need is important, and we’re excited to facilitate funding channels for existing and new initiatives,” she wrote.

Open Source as a Career Accelerator

GitHub, Peters noted, has identified an emerging trend: developers contributing to open source projects as a way to learn how to code and start careers. Two projects the company started in recent months are aimed at helping more of those early-career open source contributors gain support.

In November, GitHub launched GitHub Fund, a $10 million seed fund backed by Microsoft’s M12. The fund supported CodeSee, which maps repositories, and Novu, an open source notifications infrastructure.

“Since GitHub’s investment in CodeSee, the company has added generative AI into the platform, allowing developers to ask questions about a code base in natural language,” Peters wrote.

In April, GitHub started Accelerator, a 10-week program in which open source maintainers got a $20,000 sponsorship to work on their project; in addition, they received guidance and workshops. The project, Peters said, got 1,000 applications from maintainers in more than 20 countries; 32 participants made up the first cohort.

The participants included projects like Mockoon, a desktop API mocking application; Poly, a Go package for engineering organisms; and Strawberry GraphQL, a Python library for creating GraphQL APIs.

The direct investment, Peters wrote, was a “game changer” for Accelerator participants. “What we found there is very little existing support for open source maintainers who want to make it full time, and building a program that spoke directly to those folks had an oversized impact.”

And it’s helping to create a foundation for future funding, she added: “Based on the advice from experts, folks built a path to sustainability — whether that was bootstrapping, VC funding, grants, corporate sponsors or something else.”

Karpov offered an idea for companies that want to support their employees’ work on open source projects: providing engineers with an “open source budget” along with the learning budgets that have become a common perk.

“The developers that are typically using these [open source] projects most actively have zero budget,” he noted. “They can’t purchase anything — and frankly, frequently, they don’t even know who to ask about purchasing these sorts of things.”

An open source budget, for instance, could be spent on things like GitHub Sponsors. In return for sponsoring an open source maintainer, Karpov said, perhaps “you get a direct communication line with them, to be like, ‘Hey, can you answer this question?’ That could make developers at these big companies much more productive.”

How to Tackle Tool Sprawl Before It Becomes Tool Hell https://thenewstack.io/how-to-tackle-tool-sprawl-before-it-becomes-tool-hell/ Thu, 31 Aug 2023 13:18:03 +0000 https://thenewstack.io/?p=22717081

Today’s digital-first companies know their customers demand seamless, compelling experiences. But incidents are inevitable. That puts the pressure on operations teams already struggling with a heavy workload.

Teams looking for novel ways to tackle these challenges often hit a formidable roadblock in the form of tool sprawl. When the world is on fire, swivel-chairing between tools while trying to get the full picture is the last thing incident responders need as they try to resolve incidents and deliver a great customer experience. But complaining will get them nowhere. The key is to be able to articulate a business case for change to senior leaders.

Into the Valley of Tool Sprawl

Digital operations teams may have a slew of poorly connected tools across their environment, handling event correlation, internal communication, collaboration, workflows, status pages, customer-service case management, ticketing and more. Within each category, there may also be separate tools doing similar things. And they may be built to or governed by different standards, further siloing their operation and slowing things down.

Incident response is a collaborative process. It is also one where seconds and minutes of delay can have a real-world impact on customer experience and, ultimately, revenue and reputation.

Stakeholders from network teams, senior developers, database administrators, customer service and others may need to come together quickly to triage and work through incidents. Their ability to do so is impaired when much time and effort must be expended on simply jumping between tools to get everyone on the same page and in the same place to tackle incidents. That’s not to mention the extra licensing costs, the people to manage and maintain the tool, and the need for additional security patching, etc.

How to Tell the Right Story

Incident responders need a unified platform to tackle issues but without the need to constantly switch context. Integrating and consolidating tools can reduce sprawl and drive simplicity end to end, underpinned by a single set of standards. We’re talking about one common data model and one data flow — enabling teams to reduce costs and go faster, at scale.

Such platforms exist. However, engineers and developers typically don’t have the power to demand change and drive adoption. But that shouldn’t stop them from asking for change. To do this, they must play a longer game, one designed to influence those holding the purse strings. It’s about telling a story in the language that senior executives will understand. That means focusing on business impact.

Humans are naturally story-driven creatures, so senior leaders will likely respond well to real-life examples of how disruptive context switching can be. When speaking to senior leaders, teams should seek to bring problems to life with a story.

Consider the most recent incident that’s affecting customers. How did your team identify and triage the incident? In many cases, teams don’t have a centralized place to capture incident context. This leads to them having to chase information across systems to understand what happened and access the context needed to start remediation. This adds critical time to the process and, in the larger incidents, a loss of customer trust.

Once the issue has been identified, you then have to communicate to the right people. This involves a lot of tools to pull in incident responders and subject matter experts. On top of this, teams also need to communicate about incidents to business and customer stakeholders, which again requires switching between different systems to craft and send messages.

Much of this is manual work that could be automated, but that’s only possible from one place, not disparate systems. The intent isn’t to get to a single pane of glass, which can be a fool’s errand as tools and processes evolve, but building a first pane of glass with the necessary context to immediately resolve issues is a great target.

Using this scenario, don’t be shy in naming all the specific tools and systems teams had to switch between to get to the end goal: uptime. Build a picture of the volume you are having to juggle. It’s also important to weave in the impact of the tool sprawl on the business.

A good starting point is to calculate how much time managing these disparate solutions added to resolving the last SEV 1 incident. Then multiply the figure by how many such incidents there were in the previous 12 months, and then work out how that translates into team costs.
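
Here is a back-of-the-envelope version of that calculation; every figure is a placeholder to be replaced with your own numbers:

    # All figures are illustrative placeholders; plug in your own.
    extra_minutes_per_sev1 = 45        # time lost to tool-hopping per SEV 1 incident
    sev1_incidents_last_year = 24
    responders_per_incident = 6
    loaded_cost_per_hour = 120         # fully loaded hourly cost per responder, in dollars

    hours_lost = extra_minutes_per_sev1 / 60 * sev1_incidents_last_year * responders_per_incident
    annual_cost = hours_lost * loaded_cost_per_hour

    print(f"{hours_lost:.0f} responder-hours and ${annual_cost:,.0f} lost to tool sprawl per year")
    # 108 responder-hours and $12,960 lost to tool sprawl per year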

These are the kinds of calculations that can make a big impact on senior decision-makers. It’s about showing the financial and temporal cost of tool sprawl to incident response and, ultimately, the business. If the figure is striking, it might be enough to start a conversation with the people who can make a difference. The same calculation can then be applied to lower-severity but more frequent issues, which can solidify your position.

By bringing the problem to life and showing the business and, most importantly, customer impact, teams can have practical conversations with decision-makers that can help to drive change and bring incident response processes into one place.

One Tool to Rule Them All

The valley of tool sprawl is bad enough. But combine it with a deluge of manual processes, and you have a recipe for too much toil and multiple points of failure. Maintaining and managing multiple tools is time-consuming, unwieldy and expensive. It requires continuous training for staff and disrupts critical workflows at a time when seconds often count. In this context, something as simple as an operations cloud to capture incident context from multiple systems of record and automate incident workflows can make a huge difference to responder productivity.

Centralizing on a single, unified platform for digital operations should be a no-brainer. But to get there, teams have to engage senior decision-makers. It’s no use complaining that context switching between tools is causing problems.

The key is to prove it, pairing data with stories until the case is irrefutable. It’s the way to win over hearts, minds and wallets — and lay a pathway out of the valley of tool sprawl, toward optimized operations.

The post How to Tackle Tool Sprawl Before It Becomes Tool Hell appeared first on The New Stack.

]]>
SRE vs Platform Engineer: Can’t We All Just Get Along? https://thenewstack.io/sre-vs-platform-engineer-cant-we-all-just-get-along/ Wed, 30 Aug 2023 14:48:48 +0000 https://thenewstack.io/?p=22716665

So far, 2023 has been all about doing more with less. Thankfully, tech layoffs — a reaction to sustained, uncontrolled

The post SRE vs Platform Engineer: Can’t We All Just Get Along? appeared first on The New Stack.

]]>

So far, 2023 has been all about doing more with less. Thankfully, tech layoffs — a reaction to sustained, uncontrolled growth and economic downturn — seem to have slowed. Still, many teams are left with fewer engineers working on increasingly complex and distributed systems. Something’s got to give.

It’s no wonder that this year has seen the rise of platform engineering. After all, this sociotechnical practice looks to use toolchains and processes to streamline the developer experience (DevEx), reducing friction on the path to release, so those that are short-staffed can focus on their end game — delivering value to customers faster.

What might be surprising, however, is the rolling back of the site reliability engineering or SRE movement. Both platform teams and SREs tend to work cross-organizationally on the operations side of things. But, while platform engineers focus on that DevEx, SREs focus on reliability and scalability of systems — usually involving monitoring and observability, incident response, and maybe even security. Platform teams are all about increasing developer productivity and speed, while SRE teams are all about increasing uptime in production.

Lately, a lot of organizations are also in the habit of simply waving a fairy wand and — bibbidi-bobbidi-boo! — changing job titles, say from site reliability engineer, sysadmin or DevOps engineer to platform engineer. Is this just because the latter makes for cheaper employees? Or can a change in role really make a difference? How many organizations are changing to adopt a platform as a product mindset versus just finding a new way to add to the ops backlog?

What do these trends actually mean in reality? Is it really SRE versus platform engineering? Are companies actually skipping site reliability engineering and jumping right into a platform-first approach? Or, as Justin Warren, founder and principal analyst at PivotNine, wrote in Forbes, is platform engineering already at risk of “collapsing under the weight of its own popularity, hugged to death by over-eager marketing folk?”

In 2023, we have more important things to worry about than two teams with similar objectives feeling pitted against each other. Let’s talk about where this conflict ends and where collaboration and corporate cohabitation begins.

SREs Should Be More Platform-focused

There’s opportunity in bringing platform teams and SREs together, but a history of friction and frustration can slow that collaboration. Often, SREs can be seen as gatekeepers, while platform engineers are just setting up the guardrails. That perception could simply be the shine on more nascent platform teams, or it could be the truth at some orgs.

“Outside of Google, SREs in most organizations lack the capacity to constantly think about ways to enable better developer self-service or improve architecture and infrastructure tooling while also establishing an observability and tracing setup. Most SRE teams are just trying to survive,” wrote Luca Galante, from Humanitec’s product and growth team. He argues that too many companies are trying to follow the lead of these “elite engineering organizations,” and the result is still developers tossing code over the wall, leaving too much burden on SREs to try to catch up.

Instead, Galante argues, a platform as a product approach allows organizations to focus on the developer experience, which, in turn, should lighten the load of operations. After all, when deployed well, platform engineering can actually help support the site reliability engineering team by reducing incidents and tickets via guardrails and systemization.

In fact, Dynatrace’s 2022 State of SRE Report emphasizes that the way forward for SRE teams is a “platform-based solution with state-of-the-art automation and everything-as-code capabilities that support the full lifecycle from configuration and testing to observability and remediation.” The report continues that SREs are still essential in creating a “single version of the truth” in an organization.

A platform is certainly part of the solution. It’s just that, as we know from this year’s Puppet State of Platform Engineering Report, most companies have three to six different internal developer platforms running at once. That could leave platform and SRE teams working in isolation.

The technical strategy consultancy Xenonstack actually places platform engineering and SRE at different layers of the technical stack, not in opposition to each other. It treats SRE as a lower-level, foundational process, while platform engineering is a higher-level process that abstracts out ops work, including that which the SRE team puts in place.

Both SRE and platform teams are deemed necessary functions in the cloud native world. The next step is to figure out how they can not just collaborate but integrate their work together. After all, a focus on standardization, as is inherent to platform engineering, only supports security and uptime goals.

Another opportunity is in how SREs use service level objectives (SLOs) and error budgets to set expectations for reliability. Platform engineers should consider applying the same practices but for their internal customers.
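
As a rough illustration of how an error budget falls out of an SLO target, consider this minimal Java sketch; the SLO, request count and failure count are invented numbers, not figures from any of the reports cited here.

```java
public class ErrorBudget {
    public static void main(String[] args) {
        // Invented numbers: a 99.9% availability SLO over a 30-day window.
        double sloTarget = 0.999;
        long totalRequests = 50_000_000L;
        long failedRequests = 32_000L;

        long budget = Math.round(totalRequests * (1 - sloTarget)); // failures the SLO tolerates
        double burnedFraction = (double) failedRequests / budget;  // share of the budget consumed

        System.out.printf("Error budget: %d failed requests allowed, %.1f%% burned%n",
                budget, burnedFraction * 100);
    }
}
```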

The same Dynatrace State of SRE Report also found that, in 2022, more than a third of respondents already had the platform team managing the external SLOs.

In the end, it is OK if these two job buckets become grayer — even to the developer audience — so long as your engineers can work through one single viewpoint and, when things deviate from that singularity, they know who to ask.

How SREs Built Electrolux’s Platform

Whether a platform enables your site reliability team or your SREs can help drive your platform-as-a-product approach, collaboration yields better results than conflict. How it’s implemented is as varied as an organization’s technical stack and company culture.

Back in 2017, the second-largest home appliance maker in the world, Electrolux, shifted toward its future in the Internet of Things. It opened a digital products division to eventually connect hundreds of home goods. This product team kicked off with ten developers and two SREs. Now, in 2023, the company has grown to about 200 developers helping to build over 350 connected products — supported by only seven SREs.

Electrolux teammates Kristina Kondrashevich, SRE product owner, and Gang Luo, SRE manager, spoke at this year’s PlatformCon about how building their own platform allowed them to scale their development and product coverage without proportionally scaling their SRE team.

Initially, the SREs and developers sat on the same product team. Eventually, they split up but still worked on the same products. As the company scaled with more product teams, the support tickets started to pile up. This is when the virtual event’s screen filled with screenshots of Slack notifications around developer pain points, including service requests, meetings and logs for any new cluster, pipeline or database migration.

Electrolux engineering realized that it needed to scale the automation and knowledge sharing, too.

“[Developers] would like to write code and push it into production immediately, but we want them to be focused on how it’s delivered, how they provision the infrastructure for their services. How do they achieve their SLO? How much does it cost for them?” Kondrashevich said, acknowledging that developers don’t usually care about this information. “They want it to be done. And we want our consumers to be happy.”

She said they realized that “We needed to create for them a golden path where they can click one button and get a new AWS environment.”

As the company continued to scale to include several product teams serving hundreds of connected appliances, the SRE team pivoted to becoming its own product team, as Electrolux set out to build an internal developer platform in order to offer a self-service model to all product teams.

Electrolux’s platform was built to hold all the existing automation, as well as well-defined policies, patterns and best practices.

“If developers need any infrastructure today — for example, if they need a Kubernetes cluster or database — they can simply go to the platform and click a few buttons and make some selections, and they will get their infrastructure up and running in a few minutes,” Luo said. He emphasized that “They don’t need to fire any tickets to the SRE team and we ensure that all the infrastructure that gets created has the same kind of policies, [and] they follow the same patterns as well.”

[Slide: A smiley face pours into the platform, which includes infrastructure templates, service templates, API templates and internal tools, and brings availability, CI/CD, monitoring, SLOs, alerting, security, cost and dashboards into the cloud.]

“For developers, they don’t need to navigate different tools, they can use the single platform to access most of the resources,” he continued, across infrastructure, services and APIs. “Each feature contains multiple pre-defined templates, which has our policies embedded, so, if someone creates a new infrastructure or creates a new service, we can ensure that it already has what we need for security, for observability. This provided the golden path for our developers,” who no longer need to worry about things like setting up CI/CD or monitoring.

Electrolux’s SRE team actually evolved into a platform-as-a-product team, as a way to cover the whole developer journey. As part of this, Kondrashevich explained, they created a platform plug-in to track cloud costs as well as service requests per month.

“The first intention was to show that it costs money to do manual work. Rather the SRE team can spend time and provide the automation — then it will be for free,” she said. Also, by observing costs via the platform, they’ve enabled cross-organization visibility and FinOps. “Before our SRE team was responsible for cost and infrastructure. Today, we see how our product teams are owners of not only their products but…their expenses for where they run their services, pipelines, etcetera.”

They also measure platform success with continuous surveying and office hours.

In the end, whether it’s the SRE or the product team running the show, “Consumer experience is everything,” Kondrashevich said. “When you have visibility of what other teams are doing now, you can understand more, and you can speak more, and you can share this experience with others.”

To achieve any and all of this, she argues, you really need to understand what site reliability engineering means for your individual company.

The colleagues ended their PlatformCon presentation with an important disclaimer: “You shouldn’t simply follow the same steps as we have done because you might not have the same result.”

The post SRE vs Platform Engineer: Can’t We All Just Get Along? appeared first on The New Stack.

]]>
Java 21 Is Nigh, Whither JavaOne? https://thenewstack.io/java-21-is-nigh-whither-javaone/ Wed, 30 Aug 2023 14:11:21 +0000 https://thenewstack.io/?p=22717011

Oracle is about to release Java 21 (JDK 21) next month at its CloudWorld conference. The technology recently reached release candidate

The post Java 21 Is Nigh, Whither JavaOne? appeared first on The New Stack.

]]>

Oracle is about to release Java 21 (JDK 21) next month at its CloudWorld conference. The technology recently reached release candidate status and is ready to go.

Java 21 is a long-term support (LTS) version of the technology, which means it offers longer-term stability as Oracle will provide users with premier support until September 2028 and extended support until September 2031.

No JavaOne

However, the vehicle through which Oracle would typically highlight new technology, the JavaOne conference, is a no-go this year.

This is interesting because Oracle made much fanfare about bringing JavaOne back last year. Back in April 2018, Oracle announced that the JavaOne conference would be discontinued in favor of a more general programming conference called Oracle Code One. So bringing it back last year was a big deal. Now Java 21 will be released at Oracle CloudWorld on Sept. 19.

“We’re reimagining the format of JavaOne and I’ll share more details as soon as I have them,” an Oracle spokesman told The New Stack. “In lieu of JavaOne at CloudWorld this year, we’ll have 10 dedicated sessions for Java at CloudWorld and several Java executives in attendance (in addition to announcing Java 21 on Tuesday, Sept. 19).”

Simply put, JavaOne used to be the “ish”. In the early days of the conference, you could easily run into members of the core Java creation team walking around Moscone Center enjoying rockstar status, including Bill Joy, Arthur van Hoff, Kim Polese, Guy Steele and the “father” of Java, James Gosling.

Sun Microsystems started the annual JavaOne conference in 1996. I attended that one and a majority of the annual events until Oracle shelved it.

Varied Opinions

Now, folks have varying opinions about Oracle’s decision.

“It was a surprise that Oracle decided not to hold a JavaOne event again after relaunching it last year,” said Simon Ritter, deputy CTO at Azul Systems, a Java Development Kit (JDK) provider. “I couldn’t attend but was told that attendance wasn’t as high as Oracle had anticipated. The good news for Java developers is that there are several excellent alternatives offering high-quality presentations from acknowledged Java experts along with networking opportunities. In the US, there is the DevNexus conference, and in Europe, there are several Devoxx events as well as Jfokus, JavaZone and JavaLand. For more local events, there are many Java User Groups (JUGs), so the Java community is more than able to step up and fill the gap left by JavaOne.”

Meanwhile, Holger Mueller, an analyst at Constellation Research, also seemed startled by the move. “It shows that enterprises, even with the best intentions, are reconsidering offerings at a faster pace. I was surprised.”

Oracle’s decision regarding JavaOne was not unexpected by Brad Shimmin, an analyst at Omdia.

“That’s not totally unexpected, given the natural expansion and contraction we see regularly with larger technology providers. IBM, for example, has done the same, merging many conferences into one (IBM Pulse) and then pulling select shows back out if the market shows enough interest/importance for doing so,” he said. “In other words, it wouldn’t surprise me to see a stand-alone JavaOne in the future. That said, in looking through their show materials, it seems this year, the company is looking to blend Java into its numerous developer-oriented technologies, including db/app tools like Apex, its data science family of tools, and of course its database. Given that many data science projects written in the likes of Python and R end up getting refactored in Java for performance, security, etc., this makes good sense.”

Yet it makes no sense to Cameron Purdy, founder and CEO of xqiz.it and a former vice president of development at Oracle.

“It’s worse than short-sighted to chop one of the best-known and most influential developer conferences in the world. And for a software company, ‘neglect developers’ is a losing strategy. I really don’t understand this decision,” he told The New Stack. “Sure, a developer-focused conference may look a lot different from Oracle’s annual OpenWorld conference, but for the same reasons, it should also be dramatically simpler and less expensive to run — which can also make it far more accessible to developers. At some point, some Oracle executive is going to ask, ‘What would it cost us now to build a developer community?’ and the answer will be: ‘A thousand times more than what it would have cost us to nurture the amazing developer community that we had.’”

Omdia’s Shimmin had a similar but a bit more diplomatic take on Oracle’s decision.

“More broadly, however, I feel that this show contraction coupled with the recent Java licensing changes, which were not received very well by the Java community, will make it harder for Oracle to not just build but maintain broad interest in this extremely important language and open source project (OpenJDK),” he said.

Java 21

Meanwhile, the release of Java 21 is on the horizon with several new features, minus one. The release has 15 new features; however, one proposed feature, the generational mode for the Shenandoah garbage collector, was dropped from this release.

“It is a solid release and good to see 15 of the announced 16 features making it. It’s a pity the new garbage collector, Shenandoah didn’t make it,” Mueller said. “Regardless this is a key release for enterprises as it will be supported for five years with premium and another three with extended support, an appropriate timeframe for enterprises to put their next-generation applications on it.  Of the 15 features that made it none really stands out, with a lot of them in preview … Which is again good for enterprises as they can prepare their Java apps for them in future Java releases. ‘Boring’ releases are good as they bring stability to the platform. And it’s another Java release that proves that Oracle is a better steward to Java than the community originally expected and feared.”

Java 21 is long awaited because it is an LTS release, Purdy explained. That means companies can count on it being supported for years to come. And it has a slew of features that most companies aren’t using yet because they were introduced over several previous Java releases that weren’t covered by an LTS release, he noted.

“So lots of developers have been actively playing with and even developing with these features, but often waiting for the LTS release to make it all official,” Purdy said. “So I do expect to see a surge of adoption for Java 21. And the Java team doesn’t appear to be slowing down, either — there’s quite a bit more in their development pipeline, and a steady six-month cadence of releases, just one after another.”

LTS is where it’s at, according to Azul’s Ritter. “Many users favor stability over constant evolution, which is why LTS releases are the most common choice for enterprise deployments,” he said. “As an LTS release, adoption for JDK 21 is going to be high, especially given the inclusion of virtual threads. However, most users will likely wait 6-12 months before deploying in production to allow full stabilization of the new features through at least two update cycles.”

Being an LTS release is only applicable to binary distributions of the OpenJDK; those deploying into enterprise, mission-critical environments will typically choose LTS releases as they know they can continue to get maintenance and support for extended periods of time, Ritter noted.

Yet, “In terms of new features, Virtual Threads can provide significant scalability improvements for developers working on applications that use the very common thread-per-request programming model, which covers many server-style applications,” Ritter told The New Stack. “Other notable enhancements include both pattern matching for switch and record patterns becoming full, rather than preview, features. Combined with the inclusion of string templates, this demonstrates the continued evolution of the Java platform to increase developers’ productivity whilst maintaining excellent backward compatibility.”

Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. String templates complement Java’s existing string literals and text blocks by coupling literal text with embedded expressions and template processors to produce specialized results. One goal of this feature is to simplify the writing of Java programs by making it easy to express strings that include values computed at run time.
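
As a rough sketch of the thread-per-request style on virtual threads, the following Java 21 snippet submits 10,000 blocking tasks to a virtual-thread-per-task executor; the task body and counts are illustrative only, and string templates remain a preview feature in JDK 21, so the snippet sticks to virtual threads.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // One cheap virtual thread per task suits the thread-per-request model.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // stands in for blocking I/O
                        return i;
                    }));
        } // try-with-resources waits for submitted tasks before closing
    }
}
```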

The pattern matching for switch feature is built to enhance the Java programming language with pattern matching for switch expressions and statements. Extending pattern matching to switch allows an expression to be tested against a number of patterns, each with a specific action so that complex data-oriented queries can be expressed concisely and safely.
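
A minimal, hypothetical example of pattern matching for switch over a sealed hierarchy might look like this; the Shape types are invented for illustration.

```java
sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class ShapeSwitch {
    static double area(Shape shape) {
        // The switch is exhaustive over the sealed hierarchy, so no default branch is needed.
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(2)));        // ~12.566
        System.out.println(area(new Rectangle(3, 4)));  // 12.0
    }
}
```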

And record patterns deconstruct record values. Record patterns and type patterns can be nested to enable a powerful, declarative, and composable form of data navigation and processing. A goal is to extend pattern matching to express more sophisticated, composable data queries.
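
And a small sketch of nested record patterns, again with made-up types, shows that deconstruction in action.

```java
record Point(int x, int y) {}
record Line(Point start, Point end) {}

public class LinePatterns {
    static String describe(Object obj) {
        // A nested record pattern deconstructs both Points in a single match.
        if (obj instanceof Line(Point(var x1, var y1), Point(var x2, var y2))) {
            return "line from (%d, %d) to (%d, %d)".formatted(x1, y1, x2, y2);
        }
        return "not a line";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Line(new Point(0, 0), new Point(3, 4))));
    }
}
```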

When Java 21 becomes available, Azul will be releasing a fully compatible distribution, Azul Zulu, that’s free for everyone to use.

“We have many other features that can maximize the performance benefit for all Java users, even those who still look to Java 21 on the horizon,” Ritter said.

The post Java 21 Is Nigh, Whither JavaOne? appeared first on The New Stack.

]]>
IDPs: A Piece of the Developer Experience Puzzle https://thenewstack.io/idps-a-piece-of-the-developer-experience-puzzle/ Tue, 29 Aug 2023 14:42:34 +0000 https://thenewstack.io/?p=22716840

Cloud native software development, thanks to its decentralized nature, can almost appear to be a virtuoso solo act from a

The post IDPs: A Piece of the Developer Experience Puzzle appeared first on The New Stack.

]]>

Cloud native software development, thanks to its decentralized nature, can almost appear to be a virtuoso solo act from a single developer. But to orchestrate the symphony, and take advantage of the promise of the cloud’s speed, agility and flexibility, many organizations invest in internal developer platforms (IDPs) as a piece of the cloud native development puzzle.

Not only are IDPs a way to pave the path, speed up onboarding and reduce complexity, friction and cognitive toil, but these platforms are a piece of a bigger and more fundamental concern: the developer experience.

The Evolution of the Cloud Native Developer Experience

In 2021, Gartner predicted that by 2025, more than 95% of new digital workloads would be cloud native (up from 30% at the time of the prediction). Whether or not that prediction materializes, cloud native is clearly becoming mainstream. The ability to take advantage of distributed services for rapid scale and flexibility convinced early adopters to evolve from a monolithic, and often slow, approach to software development to a decentralized microservices paradigm.

The cloud native evolution split applications into smaller, more manageable, interconnected pieces, which fundamentally changed the developer experience. This supports the developer’s ability to move faster with less risk but also introduces new complexities and different threats to the mix.

Has the move to cloud native and Kubernetes been a net positive for the developer experience? The answer depends on the organization, its needs and how teams decided to introduce cloud native into their day-to-day work. Some organizations started as cloud native and gave developers control of the full software life cycle, and some developers welcomed that level of control.

However, developers working for other organizations prefer a much clearer delineation of responsibilities and relief from outside-of-scope tasks, such as infrastructure management. Either way, the complexities of Kubernetes and building cloud native apps have led to active discussions about platform engineering and the need for internal platforms to shape and codify the necessary abstractions developers need to build applications.

Assemble Puzzle Edges: Internal Development Platforms

For the 1% developer (early adopters, cloud native natives, for example), an IDP might not seem as needed as in an organization where the 99% developer (the majority) works, but centralizing aspects of what is otherwise quite decentralized can provide benefits, particularly for the nebulous, varied developer experience. An IDP can be a framework, or the edges of the puzzle, helping to create guardrails for developer self-service capabilities. That can both empower developers and contribute to the ultimate goal of speed – of both development and releasing new applications and features to end users.

Let’s assume that an organization is well into its cloud native journey and has evaluated the return on investment, identified the addressable obstacles and understands the expertise needed to build an IDP that suits its developers. Oft-cited obstacles to getting started and being successful with Kubernetes can include everything from operational tasks being shifted to developers, which demands an extended developer skill set, to architectural and infrastructural complexity. Each obstacle diverts developers’ focus from their core functions and can slow down development productivity, which affects the ultimate end goal: shipping software.

At its core, the platform must first give application developers exactly what they need to jump in and remove bottlenecks so they can code and ship their software.

Second, the platform must serve its stated purpose: addressing the needs of the developers and the organization. Will the platform support your organization’s business objective? Organizations at the vanguard of cloud native development can benefit tremendously from creating a self-service IDP.

This approach empowers developers to own the full software life cycle because this is the role they are accustomed to from Day 1. This approach is particularly beneficial for cloud native organizations that don’t have as much, if any, legacy code and application infrastructure to manage.

The reality today, however, is that the vast majority of developers (the 99%), work with multiple kinds of legacy — infrastructure, tools and processes — that make up the creation and operation of applications for stalwart institutions, such as banking, retail, insurance, healthcare and more. Stringent security measures are put in place to safeguard against sophisticated threat vectors as these organizations cannot afford the financial impact and reputational damage associated with a potential breach.

However, cloud native development has made strong inroads. This has largely been possible because development organizations have begun to figure out how to put complex pieces to work effectively for them within the confines of their organizations in the broader cloud native world. In deciding how to adopt cloud native, organizations have identified the keys to developer success and productivity. There are, arguably, two basic keys: one is developer tools, which can potentially be part of an IDP; the second is understanding the need to support the developer experience on a cultural level.

With an IDP, development teams gain access to the same tools and insights, which contributes to everything from speedier onboarding and shorter time to first commit, to better developer collaboration and faster feedback through productivity and collaboration tools, to overall velocity of development and frequency of deployment.

Numerous cloud native tools exist to accomplish all of these things — from both commercial and open source platform solutions and service catalogs to more specific tools that do everything from aiding collaboration to bridging the gap between remote clusters and the local machine (another cloud native development challenge).

Selecting the right tools and bundling them in an IDP can make a developer’s job significantly easier, a key part of improving the developer experience. An IDP can also deliver multiple positive effects for organizations. Developer productivity and happiness produce faster product and feature releases. IDPs foster standardization, leading to reduced maintenance costs and more scalable setups.

Ultimately an IDP developed to serve the needs of a given organization can deliver relief to developers who do not necessarily want to become part-time platform engineers, can establish consistency in processes and workflows, and can enable safer, faster software deliveries. But before we get ahead of ourselves, we can’t overlook one bigger-picture consideration when introducing an IDP: culture.

Don’t Forget the Cultural Aspects: Beyond the IDP

Creating and asking developers to use an IDP makes a lot of sense, but only if the cultural piece of the puzzle is considered as well. Bring developers into the IDP planning and development process. Ask what they need to not only make their jobs easier but what will aid in their ability to develop software that meets the expectations of their organizations’ customers. Two-way communication is critical not just to the usability of the IDP, but to the overall developer experience.

The positive developer experience is forged in acknowledging the complexity developers routinely face and actively creating ways to navigate that complexity. This starts with communication and continues with internal developer advocacy – or formal support within an organization for improving the developer experience.

With collaboration, the cloud native community converges around best practices, tools and processes, develops learning and sharing opportunities and eventually elevates its network(s) of experts and practitioners.

Making IDPs Part of Your DX Strategy

Cloud native developers and the organizations in which they work are turning to internal developer platforms to ensure that development teams have what they need to ship software faster and safely. There is no one-size-fits-all IDP, and organizations have a lot of flexibility in designing the right IDP. As with cloud native development itself, creating the right IDP to balance business goals and developer experience will rely on understanding how the pieces will fit together to complete the puzzle.

The post IDPs: A Piece of the Developer Experience Puzzle appeared first on The New Stack.

]]>
Keeping the Lights On: The On-Call Process that Works https://thenewstack.io/keeping-the-lights-on-the-on-call-process-that-works/ Fri, 25 Aug 2023 13:52:49 +0000 https://thenewstack.io/?p=22716527

The On-call process is a touchy subject for a SaaS company. On the one hand, you must have it, because

The post Keeping the Lights On: The On-Call Process that Works appeared first on The New Stack.

]]>

The on-call process is a touchy subject for a SaaS company. On the one hand, you must have it, because your prod server always seems to go down at 2 a.m. on a Saturday. On the other hand, it places a heavy burden on those who must be on call, especially at a small company like Tinybird, where I currently head the engineering team.

I have actively participated in creating the on-call process in three different companies. Two of them worked very well, while the other didn’t. Here, I’m sharing what I’ve learned about making on call successful.

Before an On-Call Process: Stress and Chaos

When I joined Tinybird, we didn’t have an on-call system. We had automated alerts and a good monitoring system, but nobody was responsible for an on-call process or a rotation schedule between employees.

Many young companies like ours don’t want to create a formal on-call process. Many employees justifiably shy away from the pressure and individualized responsibility of being on call. It seems better to just handle issues as a hive.

But in reality, this just creates more stress. If nobody is responsible, everybody is responsible.

In the absence of a formal process, Tinybird relied on proactive employees and mobile notifications for some of our alert channels. In other words, it was disorganized, unstructured and stressful. We had multiple alert channels, constant noise and many alerts that weren’t actionable. If that sounds familiar, it’s because this is typical in most companies.

Obviously, this approach to handling production outages doesn’t scale, and it’s a recipe for poor service and disgruntled customers. We knew we needed a formal on-call structure and rotation, but we wanted to avoid overwhelming our relatively small team (at the time, we had less than 10 engineers).

How It Started: Implementing an On-Call Process

People don’t want to join an on-call process. They’re afraid that this on-call experience will look like their last one, which inevitably sucked. Underneath that fear is insecurity about trying to solve a problem you know little about when nobody is around (or awake) to help. And that burden of responsibility weighs heavily. Sometimes you have to make a decision that can have a big impact. Downtime can be caused by the difference between a 0 and a 1.

Our goal at Tinybird was to assuage those fears and insecurities so that people felt empowered and respected as on-call engineers.

Before we even discussed a process, we outlined some core principles for the on-call system that would provide boundaries and guidance for our implementation.

Core Principles for an On-Call Process

  • On call is not mandatory. Some people, for various reasons, do not want or are not able to be on call. We respect that choice.
  • On call is financially compensated. If you are on call, you get paid for your time and energy.
  • On call is 24/7. We provide a 24/7 service, and we must have somebody actively on call to maintain our SLAs.
  • Minimize noise. Noise makes stress. If alerts aren’t actionable, this stress will cause burnout. Alerts must always be actionable.
  • On call isn’t just for SREs (site reliability engineers). Every engineer should be able to participate. This promotes ownership among all team members and increases everyone’s awareness and understanding of our systems.
  • Every alert should have a runbook. Since anybody from any function could be on call, we wanted to make sure everyone knew what to do even if the issue wasn’t with their code or system.
  • Minimize the amount of time spent on call. Our goal was to only have people be on call once every six weeks. Of course, depending on how many people participate, this may not be achievable, but we set it as a target regardless.
  • Have a backup. Our service-level agreements (SLAs) matter, so we always wanted to have a backup in case our primary on-call personnel were unreachable for whatever reason.
  • Paging someone should be the last resort. Don’t disrupt somebody outside of working hours unless it is absolutely necessary to maintain our SLAs. Additionally, every time an incident occurs, measures should be taken to prevent its recurrence as much as possible.

How We Implemented an On-Call Process

So, here’s how we approached our on-call implementation.

First, we made a list of all our existing alerts. We asked two questions:

  1. Are they understandable? Any of our engineers should be able to see the alert and understand the nature and severity of it very quickly.
  2. Are they actionable? Alerts that aren’t actionable are just noise. Every alert should demand action. That way, there’s no doubt about whether action should be taken when an on-call alert pops into your inbox.

Second, as much as possible, we made alerts measurable, and each one pointed to the corresponding graph in Grafana that described the anomaly.

In addition, we migrated all of our on-call alerts to a single channel. No more hunting down alerts in different places. We used PagerDuty for raising alerts.
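
As an illustration of what raising an alert through PagerDuty can look like, here is a minimal Java sketch that posts a trigger event to the Events API v2; the routing key, summary and source are placeholders for illustration only, not a real integration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PagerDutyTrigger {
    public static void main(String[] args) throws Exception {
        // Placeholder routing key and payload; a real integration key comes from
        // the PagerDuty service the alert should route to.
        String body = """
                {
                  "routing_key": "YOUR_INTEGRATION_KEY",
                  "event_action": "trigger",
                  "payload": {
                    "summary": "Ingestion latency above threshold",
                    "source": "ingestion-pipeline",
                    "severity": "critical"
                  }
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://events.pagerduty.com/v2/enqueue"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```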

Critically, we created a runbook for each alert that describes the steps to follow to assess and (hopefully) fix the underlying issue. With the runbook, engineers feel empowered to solve the problem without having to dig for more context.

For about two months, every Monday, Tinybird’s CTO and I would meet with the platform team to review each alert with the following objectives:

  1. If the alert was not actionable or was a false positive, correct or eliminate it.
  2. If the alert was genuine, analyze it to find a long-term solution and give it the necessary priority.

We also started reviewing each incident report collaboratively with the entire engineering team. Before we implemented this process, we would create incident reports (IRs) and share them internally, but we decided to take it a step further.

Now, each IR is presented to the entire engineering team in an open meeting. We want everyone to understand what happened, how it was resolved and what was affected. We use the meeting to identify action points that can prevent future occurrences, such as improving alerts, changing systems, architecture changes, removing single points of failure, etc. This process not only helped us mitigate future issues but also helped increase ownership and overall knowledge of our code and systems across the entire team. The more people know about the codebase, the more they feel empowered to fix something when they are on call.

Initially, we had just three people on call (two engineers and the CTO). We knew this would be challenging for these three, but it was also a temporary way to assess our new process before we rolled it out to the entire team.

Note that we still made on call mandatory during working hours. Each engineer is expected to take an on-call rotation during a normal shift. This has several benefits:

  1. Increased ownership: Being on call makes you realize the importance of shipping code that is monitored and easily operable. If you know you’re going to be on call to fix something you shipped, you’ll spend more time making sure you know how to operate your code, how to monitor it and how to parse the alerts that get generated.
  2. Knowledge sharing and reduced friction: Being on call can feel scary when you’re alone. But if you’re on call during working hours, you are not alone. For newcomers, this helps them ease into the on-call process without anxiety. They learn how to respond to common alerts, and they also learn that being on call isn’t as noisy or scary as they think.

Every week, when the on-call shift changes, we review the last shift. We use this time to share knowledge and tricks, identify cross-team initiatives necessary to improve the system as a whole and so on.

Finally, anytime a person is the primary person on call overnight, we give them the next day off.

How It’s Going: Where Are We Now?

After about a year of implementing this new on-call process, we have nine people rotating on the primary (24/7) on-call system and six people simultaneously on call during working hours.

It has worked exceptionally well. While I won’t go so far as to say that our engineers enjoy being on call, I think it is fair to say that they feel empowered to handle issues that do arise, and they know they have a forum where they can share difficulties about the on-call system and suggestions for how to improve them.

If you’re interested in hearing more about Tinybird and our on-call system, I’d love to hear from you. Also, if you’re inspired by Tinybird’s on-call culture and think you’d like to work with us, check out our open roles here.

The post Keeping the Lights On: The On-Call Process that Works appeared first on The New Stack.

]]>
Documentation Is More than Your Thinnest Viable Platform https://thenewstack.io/documentation-is-more-than-your-thinnest-viable-platform/ Fri, 25 Aug 2023 10:00:56 +0000 https://thenewstack.io/?p=22716486

As you look to abstract out cross-organizational complexity, team topologies would have you kick off your platform engineering journey with

The post Documentation Is More than Your Thinnest Viable Platform appeared first on The New Stack.

]]>

As you look to abstract out cross-organizational complexity, Team Topologies would have you kick off your platform engineering journey with the thinnest viable platform. “This ‘TVP’ could be a wiki page,” said co-author Matthew Skelton in the intro video, or a platform could be as simple as documenting key things like: “We use this cloud provider. We only use these services from the cloud provider. Here’s the way we use them.”

This may not be where your internal developer platform (IDP) ends, but, more often than not, documentation is where it should begin. These “docs” record the most common software development processes and can — probably should — include anything and everything from: how to do X at this company, to error codes and definitions, to the context behind why technical decisions were even made. Docs increase self-service and decrease the internal customer support for your platform. Docs explain who does what. Docs allow people to learn about and then onboard to your platform. Docs teach newcomers how to fish. And docs market your platform to your internal customers.

In fact, the 2021 State of DevOps Report found that teams with good documentation deliver software faster and more reliably than those with poor documentation. Similarly, GitHub’s 2021 State of the Octoverse uncovered that developers see about a 50% productivity boost when documentation is “up to date, detailed, reliable and comes in different formats.” And, in this year’s State of API Report, more than half of the developers surveyed cited a lack of documentation as the biggest obstacle to consuming APIs — which is the common way to access a platform.

Whether it’s your first step and thinnest viable platform or you hire a technical writing team to really dive deep, documentation is absolutely a critical part of your platform strategy. Read on for everything you need to know before tackling the documentation for your internal developer platform.

Documentation Enables “Stigmergy”

“The best way to build a platform is through documentation,” argued Kristof Van Tomme, cofounder and CEO of Pronovix, a developer portal specialist consultancy that builds open source developer portals. Pronovix also organizes the DevPortal Awards and the API the Docs conference.

“Spotify has this concept of the golden path, which is basically: What is the best way to do X in our company?” Van Tomme offered his definition of an internal developer platform as “creating shared infrastructure that is better maintained and centrally maintained, that facilitates the implementation of certain downstream stuff.”

Of course, the idea of an internal platform isn’t new, it’s just been historically very top-down. “A lot of platform initiatives go wrong,” he continued, when you just tell a platform team to go build a “platform.”

Golden paths to Van Tomme are akin to the natural phenomenon of stigmergy. When a creature like an ant finds food that it brings back to the anthill, it leaves a pheromone trail to signal to other ants where the food was. The more ants that follow that path, the heavier that pheromone trail gets, enabling contextual learning. The queen or queens have a specialized role, but it’s not to be the boss. In ant colonies, the decision processes emerge because of evolved behavioral patterns that, together with information stored in the environment, enable systems and processes that are both adaptive and resilient.

Documentation fulfills a similar role for an organization as a stigmergic signal. And, he argues, a technical writer will be best suited to document these paths and then merge them together in a way that makes things easier for developers.

“I think that the best way of building platforms is where you are trying to get people to agree on things, but not by forcing them to do things in a certain way because that normally doesn’t work, [and] it’s not a very healthy way of doing things,” he explained. If a platform team is going to remove some developer autonomy and choice, then you’d better “make it the most obvious path, make it easier to follow the right path. And that’s through documentation.”

Docs Help Scale Your Platform’s Scope

Van Tomme reflected on a conversation he had, on the API Resilience podcast, with Jabe Bloom, chief sociotechnical officer and founder at Ergonautic. Bloom talks about how there are three economic models that teams can work under that explain how a platform works:

  • Differentiation: create variety and value that customers want to pay for.
  • Scale: decrease variability to increase the efficiency of used resources.
  • Scope: create reusable building blocks that couple the economies of differentiation and scale.

Most companies only focus on differentiation and scale.

With a platform strategy focused on scope, Bloom said, organizations can unlock reusability and resiliency, which in turn allows teams to offer differentiating value faster, at scale. “Some things cannot be over-consumed and actually increase in value as they’re consumed,” Bloom explained. Orgs must lower barriers to accessing these things, as “ways of unlocking a huge amount of value in assets and resources that your organization already has.”

In fact, Van Tomme explained, these reusable assets then become better from reuse and the overall reduction of variety and complexity. He said, “You’re trying to build these assets that sit in the middle that help you to have certain ways of doing things like certain jobs to be done, that you’ve replaced by like a building block that makes them simpler to do.”

The best way to achieve this scope? Van Tomme says it all starts with technical writers on an anthropological pursuit to discover if there’s already a golden path emerging in your org.

“It’s documenting what people are already doing. So go and investigate what are the common jobs that you’re trying to build a platform about, then document those,” he said. “By documenting it, you’re actually creating that stigmergic path. Because now, when somebody goes looking for, like ‘how to do Kubernetes at my company,’ actually, there’s an article about that.”

The key is that your documentation is discoverable, he continued, especially for onboarding.

“The golden path is this way of thinking where you utilize documentarians to create an easier route to do common, important things. And then you also make some sort of commitment that ‘OK, if you follow this route, then actually you don’t have to do support on your own. We’ll take responsibility for supporting the building block,’” Van Tomme said. Then, later, you can extend the platform to include technology to make those repeat processes even easier, as well as to unlock other benefits like compliance and FinOps.

Before hiring a platform engineer — or renaming a sysadmin or DevOps engineer — Van Tomme recommends hiring a technical writer, who not only curates the documentation but does the necessary internal user experience research. This is a stark alternative to the common practice of having only engineers on a platform team, as they often have the habit of thinking they know what’s best for other engineers.

How to Get Started with Your Platform Documentation

While documentation is meant to suit the needs of your particular customers — which, with an internal developer platform, are likely your engineering colleagues — there are certain must-haves:

  • Keep it simple and to the point.
  • Include a step-by-step getting-started guide.
  • Make it searchable – self-service is key, even if it’s just via command+F to start.
  • Be specific to the user(s).
  • Keep it up to date with versioning.
  • Use interactive language, like “you.”
  • Fill it with examples.
  • Include error codes and definitions.
  • Provide pathways for feedback and even ways to contribute.
  • Clarify what isn’t supported.

Also, remember that for internal documentation, it needs to be useful, but not necessarily polished like externally branded docs would need to be. This makes it quicker and cheaper to do different kinds of knowledge sharing:

  • Written: the most common but not the only way. Again, it must be easily searchable.
  • Video-oriented: demos and feature releases can be welcoming and casual. Do always have subtitles.
  • Generative AI for platform engineering: the price keeps going down, so you soon will be able to leverage generative AI both to create docs and to respond to developer queries in natural language.

“The point is it should allow people to self-serve and the more constructive it is, the better,” Ben Wilcock, technical marketing architect at VMware, told The New Stack. “You also need marketing alongside your platform,” always working to answer the question: “How do you get people to like this thing and respect it and use it and tell other people to use it?”

Your documentation can also act as internal marketing, he continued, including what your platform does, how a feature works, which button to click and which pattern to follow. It can include what he called “creative variabilities” to the most common golden paths.

Documentation can also add context, which will be very different pre- and post-platform. Pre-platform might even be pre-DevOps or still on monolithic architecture, Wilcock reminded us, which makes running and supporting in production very different than, say, containerized and in the cloud. Platform documentation should ease that context switching, making it easier for developers to access deep work.

An area Wilcock said must be documented is the modernization of workloads. “You need a decision tree for modernizing these workloads into the cloud. How do I go about making a decision? Do I containerize? Run it in a special part of that cloud? A decision tree in itself is documentation” and can even include a scoring mechanism to decide where a workload should go.
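
As a rough sketch of how such a decision tree and scoring mechanism could be encoded, the following Java snippet uses invented criteria, weights and thresholds purely for illustration.

```java
public class WorkloadDecision {
    // Invented criteria and weights, purely for illustration.
    record Workload(boolean stateless, boolean externalizedConfig, boolean heavyOsDependencies) {}

    static String destination(Workload w) {
        int score = 0;
        if (w.stateless()) score += 2;
        if (w.externalizedConfig()) score += 2;
        if (w.heavyOsDependencies()) score -= 2;

        if (score >= 4) return "containerize and run on Kubernetes";
        if (score >= 2) return "replatform onto a managed runtime";
        return "rehost as-is in a dedicated part of the cloud";
    }

    public static void main(String[] args) {
        System.out.println(destination(new Workload(true, true, false)));
    }
}
```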

With a tool like VMware Tanzu Application Platform, he explained, “You can essentially take the essence of something and turn it into a template and offer that template for users to solve particular problems very quickly.” This could include a code generator, plus database access, plus access to other libraries, “gathered together in a recipe book,” including the step-by-step clarity of what’s happening automatically behind the scenes.

You might have a library of ready-made templates in a git repository, which a development team can add to via pull requests; the platform team can then review and version-control the changes, maintain any necessary separation of concerns — and, of course, update the docs. Also remember, architects and developers in different groups may require different standardization around microservices or preferences for different internal libraries.

When looking where to start documenting developer pains, Van Tomme echoes Wilcock by pointing to the popular troublemakers of setting up Kubernetes clusters and creating containers.

In general, as a platform tries to sell colleagues on a certain path, it’s important to include why certain toolchains or workflows were chosen in the first place, both within the documentation and in communicating new platform features. And it’s crucial to gain fans early on in order to uncover platform success stories you can use to onboard more teams.

Just be cautious not to make a garbage list, warned Van Tomme, where you dump everything into your docs — don’t distract devs, make it easy for them to stay on your own golden path.

Innersourcing with an Open Source Mindset

Open source and platform engineering have a lot in common:

  • You can’t (usually) force people to use your technology.
  • You can’t (usually) force people to contribute to your technology.
  • You have limited resources and can’t afford to waste your time providing support, especially for repeat requests.
  • You need to create a positive, self-service experience that makes them want to use it.
  • Documentation is usually both severely lacking and extremely necessary.

Indeed, while the sociotechnical discipline of treating your platform as a product is still rather nascent, there are a lot of lessons that can be carried over from open source communities.

“A good product is sometimes only good for the person that builds it,” Precious Onyewuchi, a freelance technical writer whose work includes the CHAOSS project, told The New Stack. “You may have made something really important or useful but there’s no way for people to figure out how to use what you’ve built,” without strategic documentation. She continued that platform teams need to offer a “breakdown of what the product or projects are about, so I can have context of what it’s supposed to do for me.”

Another shared challenge in both multinational organizations and open source projects is that teams scale to the point you lose insight into who does what, she reflected. A platform’s documentation shouldn’t just tell the what but the who — with clear points of contact both for the platform team and for anyone else that could be a barrier (even unintentionally) to release.

And never assume you know your audience and its knowledge level, Onyewuchi added. After all, a lot of your developers may have no prior knowledge of infrastructure, containerization or the cloud. Ask for feedback and also explain as much as you can.

“For open source documentation, one of the core problems is that whoever is in charge of documentation, they don’t get to be on the team easily,” and can feel out of the loop, she said. It’s essential that the platform strategy as a whole and the platform docs are in tight feedback loops with your internal developer customers — and quickly reflect changes and new golden paths.

In the end, innersourcing — the act of allowing people within your company to contribute to shared resources — offers a great way to enrich your platform documentation. It just can’t be done willy-nilly. Organization and searchability are essential, as is a product owner, not just of the platform but of the documentation.

Presenting an easy way for colleagues to participate in the creation of your platform — including its documentation — will always encourage adoption of your platform. Everything in platform engineering should center on your colleague user-base.


Want to learn more about platform engineering? Pre-register to receive the forthcoming ebook, “Platform Engineering: What You Need to Know Now,” sponsored by VMware Tanzu.



The post Documentation Is More than Your Thinnest Viable Platform appeared first on The New Stack.

]]>
How TechWorld with Nana Spreads DevOps Skills to Millions https://thenewstack.io/how-techworld-with-nana-spreads-devops-skills-to-millions/ Thu, 24 Aug 2023 17:35:09 +0000 https://thenewstack.io/?p=22716346

Want to learn DevOps? Then you owe it to yourself to check out TechWorld with Nana and let Nana Janashia be

The post How TechWorld with Nana Spreads DevOps Skills to Millions appeared first on The New Stack.

]]>

Want to learn DevOps? Then you owe it to yourself to check out TechWorld with Nana and let Nana Janashia be your guide.

Janashia has taught millions of people how to advance their DevOps skills through her YouTube channel, online DevOps courses, company training and workshops. TechWorld with Nana released a special DevSecOps bootcamp this summer. She and I discussed DevOps, its culture and happy chaos in this episode of The New Stack Makers.

For example, the bootcamps are six-month courses that start with an overview and finish, in the sixth month, with programming skills, automation with Python, configuration management, and monitoring with Prometheus.

On YouTube, TechWorld with Nana continually uploads new videos covering topics like Kubernetes and programming languages like Python and Go. She also helps viewers define and better understand practices such as platform engineering.

Janashia’s knowledge of the DevOps world comes through in her work, which started when she was an engineer for a company in Austria. She had started working on Kubernetes, and colleagues began to ask her about it, which led her to think more deeply about how to explain its mysteries. That was when she first realized how much she enjoyed explaining the topic and taking away people’s fear.

What’s the Current DevOps Career Path?

TechWorld with Nana receives comments from viewers saying that they switched to a DevOps career by watching her YouTube videos. But why should people get into DevOps now? What is the career path that Janashia is seeing?

For her, DevOps is still a relatively new professional role. There are aspects to it that need standardizing. 

“The biggest problem is the confusion between these roles,” Janashia said about how DevOps fits people’s roles. There are software engineers, cloud engineers, site reliability engineers (SREs) — the list goes on.

“And it’s not just the engineers who have a problem defining it, but also the companies themselves, who want to implement DevOps in their companies,” she said.

Engineers move into new companies in the same role, but the workflows and environments are entirely different, as is the concept of DevOps.

“It’s something different than what you’re used to in your previous job,” Janashia said. “So that creates a lot of confusion.”

The New Stack covers the DevOps space closely, and it’s the DevOps culture that drives its ongoing evolution. Listening to Janashia provides a bit of space and perspective on those dynamics, the kind that makes the chaos of DevOps exciting and fun to be part of as the community evolves.

Check out the full episode of Makers for more on Janashia and her take on the DevOps world.

The post How TechWorld with Nana Spreads DevOps Skills to Millions appeared first on The New Stack.

]]>
Good-Bye Kris Nóva https://thenewstack.io/good-bye-kris-nova/ Wed, 23 Aug 2023 12:57:36 +0000 https://thenewstack.io/?p=22716453

When anyone middle-aged or younger dies, it’s a cliché that they died much too young. Sometimes, it’s really true, though.

The post Good-Bye Kris Nóva appeared first on The New Stack.

]]>

When anyone middle-aged or younger dies, it’s a cliché that they died much too young. Sometimes, though, it’s really true: Someone dies who was a true, innovative leader changing the world for the better. Such a person was Kris Nóva.

I can’t claim to have known Nóva well, but she impressed me. Most people who’d met her would agree. Her job title when she died from a climbing accident was GitHub Principal Engineer. But, she was far more than that.

Not even 40, Nóva had co-founded The Nivenly Foundation, a member-controlled and democratically governed open source foundation. Its goal is to bring sustainability, autonomy, and control to open source projects and communities. Specifically, it governs Hachyderm, a popular tech-focused Mastodon instance for decentralized social media, and the Aurae Runtime Project, a Kubernetes node workload management program.

Kris Nóva and Alex Williams

Many people claim to be “thought leaders.” Only a handful really are. Nóva was one. Her Kubernetes clusterf*ck talks were famous for revealing what’s what with Kubernetes and security. She also co-authored Cloud Native Infrastructure, a must-read for anyone considering running cloud native architectures.

Nóva also authored Hacking Capitalism, a book modeling the tech industry as a system. This book is interesting for anyone who wants to know how tech works.  It’s specifically for marginalized technologists who need tools to navigate the tech business. You should read this if you’re a programmer or engineer constantly flustered by tech’s management, social, and business sides. It will give you the insight you need on how investors, top leadership, and entrepreneurs view our ruthless, but predictable, industry.

She wasn’t just a speaker and writer, though. She was also an open source developer who contributed significantly to Linux, Kubernetes, distributed runtime environments, Falco, and the Go programming language. Altogether, she had created 388 GitHub repositories. In a word, she was “impressive.”

As Josh Berkus, Red Hat’s Kubernetes Manager, said on Mastodon, “We lost one of the leading lights of tech this week. Relentlessly driven, astonishingly brilliant, and one of the bravest people I ever met, Kris Nóva was both an inspiration and a friend to dozens, if not hundreds, of people (including me). While it is fitting that she should have left us doing what she always did — taking risks — we are all poorer for having lost her.”

Indeed, we are.

The post Good-Bye Kris Nóva appeared first on The New Stack.

]]>
Tech Works: How to Fill the 27 Million AI Engineer Gap https://thenewstack.io/tech-works-how-to-fill-the-27-million-ai-engineer-gap/ Fri, 18 Aug 2023 12:00:33 +0000 https://thenewstack.io/?p=22715440

Around the globe, there are only about 150,000 machine learning engineers — a small fraction of the world’s 29 million

The post Tech Works: How to Fill the 27 Million AI Engineer Gap appeared first on The New Stack.

]]>

Around the globe, there are only about 150,000 machine learning engineers — a small fraction of the world’s 29 million software engineers.

Yet AI is driving a growing demand for large language model (LLM) developers that’s already tough to fulfill. External factors like global chip shortages and the current limits of technology mean the most in-demand skill sets will vary heavily from short term to long term — which in this new AI age can mean just a few months. That’s why U.S.-based AI engineering job listings boast six-figure salaries.

The best opportunity to start to close this gap quickly is in retraining technologists.

So how do organizations help turn software engineers into AI developers?

For this installment of Tech Works, I talked to Mikayel Harutyunyan, head of marketing at Activeloop, which helps connect data to machine learning models, about the impact of AI on developer experience and the journey of prompt engineers, data scientists and LLM developers.

Prompt Engineers: A Short-Term Solution

Since the engineering mindset is inherently scientific, it’s no surprise that most of your engineering team is already experimenting with AI. Whether you’ve asked them to or not, they’re likely pair programming with GitHub’s Copilot and ChatGPT. (It’s important to note the recent revelation that while they seem very convincing, ChatGPT’s code suggestions are wrong more than half the time.)

It’s only logical that the next step in the developing AI market is to become a prompt engineer.

In AI, a prompt is any information, like questions and answers, that communicates to the artificial intelligence what response you’re looking for. Therefore a prompt engineer is tasked with:

  • Understanding the limitations of a model.
  • Designing a prompt in natural language.
  • Evaluating performance.
  • Refining when necessary.
  • Deploying over internal data.

A common current use case is a customer service chatbot. A prompt engineer needs an understanding of not only the model but the end user or customer.
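
As a rough illustration of what “designing a prompt in natural language” looks like in practice, here is a minimal, hypothetical sketch of a templated customer-service prompt in Python. The product name, the tone rules and the call_llm helper are placeholders for this example, not part of any particular vendor’s API.

```python
# A minimal, hypothetical sketch of a templated customer-service prompt.
# `call_llm` stands in for whatever chat-completion API you actually use.

PROMPT_TEMPLATE = """You are a support assistant for {product}.
Answer only from the context below. If the answer is not there, say
"I'm not sure" and offer to connect the customer with a human agent.

Context:
{context}

Customer question: {question}
Answer (friendly, under 100 words):"""

def build_prompt(product: str, context: str, question: str) -> str:
    """Fill the template; the prompt engineer iterates on this wording and evaluates the results."""
    return PROMPT_TEMPLATE.format(product=product, context=context, question=question)

# Example use (call_llm is a placeholder for your model client):
# reply = call_llm(build_prompt("Acme Router", kb_snippets, "How do I reset it?"))
```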

But Harutyunyan predicted that this prompt engineer is more of a stopgap role reflecting current AI limitations — soon, AI models will likely do this better than humans, even reading and reacting to emotions like frustration.

In the next year or so, generative AI will likely also be able to interpret prompts that combine images and text. Think of the opportunity to evaluate whether a car accident insurance claim is valid from a written description and a few photos of the damage.

As chatbot tooling becomes more autonomous and less technical, prompt engineers will become the subject matter experts. Once customer support representatives get a repeated query, they will automatically feed the question and answer into a machine learning tool, so the chatbot answers that question next time.

It makes sense to remove the developer from the loop in order to bring the machine-learning model closer to what a specific industry or organization requires. After all, a building manager knows their building better than an off-site developer and will soon be better equipped to tweak the model that’s communicating with the HVAC and security cameras.

But until that evolution of the prompt engineer role, Harutyunyan said, the job requires more empathy for how your users think and speak. “The people will be writing this or that, and I need to make sure my model expects them to write this,” he noted, including slang, abbreviations, emojis and more.

Improv classes and pairing up engineers with your customer support representatives are two ways to build this empathy and verbal versatility quickly. Or you could offer technical training to the customer reps, who likely have that empathy and client perspective already.

And don’t worry, even if the prompt engineer role only lasts a year or two, empathy is an always in-demand skill for a software engineer.

Poll: Which job role will grow the most in the near term due to increased use of LLMs and generative AI? AI engineering is being posited as a new profession that will surpass many existing job roles. Results: AI engineer, 35%; ML engineer, 13%; MLOps engineer, 12%; data engineer, 10%; full stack engineer, 19%; other, 10%.

The New Stack VoxPop Results: 214 people responded from August 21 through August 29, 2023, indicating which technical role they think will be most in demand in the short term, as AI becomes an increasing part of software engineering workflows.

The Skills an AI Engineer Needs

It would be rare indeed to find AI engineering candidates who tick all the boxes, but there are certain technical and core skills that make you a better candidate than most. Harutyunyan grouped them into machine learning engineering skills and LLM engineering skills.

Machine Learning Skills: Python and More

The open source programming language Python reigns supreme in machine learning. Even more so now that Facebook advocated for a very technical change in Python, which Harutyunyan said makes it much more suitable for LLM training. The global interpreter lock, or GIL, lets only one thread execute Python bytecode at a time, so removing that lock allows truly multithreaded processing, which in turn speeds up training.
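
To see why that matters, consider a minimal sketch using only the standard library. With the GIL in place, CPU-bound work in a thread pool gains little, so Python code today typically reaches for processes instead; a GIL-free interpreter would let the threads run in parallel. The numbers and the tokenize_chunk stand-in are illustrative, not a benchmark of any real LLM pipeline.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def tokenize_chunk(chunk_id: int) -> int:
    """Stand-in for CPU-bound preprocessing, e.g. tokenizing one text shard."""
    total = 0
    for i in range(5_000_000):
        total += i % 7
    return total

if __name__ == "__main__":
    for name, pool_cls in [("threads", ThreadPoolExecutor), ("processes", ProcessPoolExecutor)]:
        start = time.perf_counter()
        with pool_cls(max_workers=4) as pool:
            list(pool.map(tokenize_chunk, range(4)))
        # With today's GIL, the thread pool barely beats serial execution for
        # CPU-bound work; a GIL-free interpreter would let threads run in parallel.
        print(f"{name}: {time.perf_counter() - start:.2f}s")
```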

The vast majority of software engineers have at least some familiarity with Python, but many lack other machine learning fundamentals, including statistics. Developers need to brush up on basic statistics, Harutyunyan said, as well as machine learning fundamentals like:

  • The differences between supervised and unsupervised learning.
  • What is bias in machine learning and how to remove it. (Especially in private data.)
  • How to evaluate machine learning models.

Alongside Python, be sure to learn about the LangChain framework for developing apps for large language models. Also, dive into vector databases for long-term memory for AI.
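
As a taste of what that looks like, here is a minimal sketch of the classic LangChain prompt-plus-chain pattern. It assumes the 2023-era langchain package and an OpenAI API key in the environment; the library’s interfaces have shifted across versions, so treat this as illustrative rather than definitive.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Assumes `pip install langchain openai` and OPENAI_API_KEY set in the environment.
prompt = PromptTemplate(
    input_variables=["service"],
    template="Explain in two sentences what {service} does for an LLM application.",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

print(chain.run(service="a vector database"))
```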

LLM Skills: the Transformer Model and More

Harutyunyan placed large language models more in the “deep learning skills” bucket, as it’s a still-nascent topic that has largely been locked up in academia.

To kick off your LLM journey, he recommended learning about the Transformer machine-learning model. He compared it to a mystery novel where you collect clues page by page to identify the culprit.

“A Transformer model, it kind of takes a look at all the pages of the book at once and then cross-references the clues and says ‘OK, this is the probability of the next word,’ or whatever it is.”

This model, used predominantly for text data, Harutyunyan said, “helps to make sure that you understand some relationships and patterns that are spread out over very long distances within the data.”

Then, the Transformer attention mechanism lets the model assign greater importance to some parts of the input than to others when producing its output.
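
The core of that attention mechanism fits in a few lines. Here is a minimal scaled dot-product attention sketch in NumPy, meant to illustrate the idea rather than reproduce any production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Every query scores every key; softmaxed scores weight how much of each value to mix in."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax: attention weights
    return weights @ V                                   # weighted mix of the values

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)       # -> (4, 8)
```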

Harutyunyan and many data scientists also recommend reading the seminal 2017 paper “Attention Is All You Need.”

If you’ve thus far missed reading the research paper, he added, that’s OK. “If you’re learning to drive a car, you don’t really need to read more about the first car ever made and how it was built,” he said. “This is what is so special about what’s happening right now.”

Many software engineers are simply jumping into the driver’s seat and connecting the LLM API to their data stored in a database for AI, Harutyunyan noted, “and they are building a demo that actually works.”

But, he added, an understanding of the fundamentals will give you an advantage: “That layer will get commoditized very, very quickly because everybody will be able to connect an API for the large language model to their data and build a generic app with a simple UI for a certain use case.”

Throughout this learning process, continue to learn how the LLM was trained — think natural language processing — and why your model is not working.

Once you’ve taken these steps, Harutyunyan said it’s time to learn about the data flywheel, where you productize data, increasing the speed of end-to-end value from private data. This real-time data and model runs in production, constantly feeding back changes and improvements, such as analyzing why a sale was successful or not.

He recommended checking out the popular deep-dive, step-by-step explainer videos for AI beginners created by Andrej Karpathy, formerly of Tesla and OpenAI.

Once you’re in production, you can then leverage knowledge retriever architecture for LLMs. This pulls data from existing sources like Slack, email or customer chat and stores it in a way that keeps the responses to your questions relevant. That matters even more when you don’t want to pay to store less relevant data and responses.
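
At its simplest, that retriever step is “embed the chunks, embed the question, return the closest chunks.” Here is a minimal NumPy sketch; the embed function is a hypothetical placeholder for whatever embedding model or vector database you actually call.

```python
import hashlib
import numpy as np

def embed(texts):
    """Hypothetical placeholder: in practice, call a real embedding model or vector DB here."""
    return np.array([
        np.frombuffer(hashlib.sha256(t.encode()).digest(), dtype=np.uint8).astype(float)
        for t in texts
    ])

def top_k_chunks(question, chunks, k=3):
    """Return the k chunks whose embeddings sit closest (cosine similarity) to the question."""
    chunk_vecs, q_vec = embed(chunks), embed([question])[0]
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
    )
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

docs = ["Slack thread about the Q3 outage", "Renewal email from a customer", "Chat log on SSO setup"]
print(top_k_chunks("How did we fix the outage?", docs, k=2))
```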

Core Skills: Language Paired with Engineering

Just like a DevOps team with different skills is more set up for success than a single full-stack developer, pairing or teaming up engineers — from frontend to backend to machine learning — and subject domain experts will accelerate your organization’s AI growth.

Contrary to the rumor that generative AI is stealing jobs from journalists, linguistic skills are more in demand than ever.

“What I’m seeing is that nontechnical [people] like myself can very often get better outputs from the LLM than technical people,” Harutyunyan said.

He’s found that pairing with his developer colleagues to create queries made for improved prompts and results.

“Engineers are known to be very object-oriented. So they’re like: X does Y, and then from Y goes Z,” he said. “Maybe what you also need to be is a bit more linguistically endowed and to be able to explain in better words — if you have this use case, you’re acting as this person.”

He noted that the University of California, Berkeley, recently established its College of Computing, Data Science, and Society in part to focus on including human-centric skills in AI.

The Global Chip Shortage Demands Efficiency

All the money in the world can’t buy what doesn’t exist. Anyone who has tried recently to buy a car — or a cell phone or video game console — has been hit by the ongoing microchip supply chain crisis. There simply isn’t enough compute to go around. And large language models devour hundreds of terabytes of data, a number that only increases as an LLM grows.

“In our current paradigm, where computing is the constraint and not software talent, product leaders must redefine how they prioritize various products or features, bringing GPU limitations to the forefront of strategic decision-making,” Prerak Garg, a tech and strategy adviser, recently wrote in HackerNoon.

To help organizations make decisions about LLM training, he offered product leaders a GPU prioritization framework.

The first target audience to upskill for working with LLMs is the classic machine learning engineer, who can already train smaller models and can adapt those skills to the scale of large language models.

Such engineers need significantly more knowledge of how to store data and databases for AI, Harutyunyan said, and an understanding of the unique ways to package data in order to train these exponentially larger models more efficiently and at a lower cost. This includes tabular, non-tabular and raw data, he said, like images that need to be labeled correctly.

Add to this a foundation of MLOps in order to train and deploy it, and you’ve got the complex LLM developer job description.

LLM developers who can optimize for compute are in high demand. Harutyunyan and his colleagues contend that CPUs are better than GPUs for fine-tuning LLMs for cost efficiency, particularly when GPUs are scarce.

But if you can optimize for very domain-specific performance, Harutyunyan reckoned you could cut that cost dramatically via fine-tuning of models. It’s also worth noting that an emphasis on compute efficiency typically translates to a smaller environmental impact.

Because the field of LLM development is just starting to gain momentum, training programs for technologists are relatively scarce. However, Activeloop launched Gen AI 360: Foundational Model Certification, a free program, in June, in collaboration with TowardsAI and the Intel Disruptor Initiative.

Its course on LangChain, vector databases and foundational models has already been taken by more than 10,000 senior-level developers and managers worldwide, according to Activeloop.

A subsequent certification program on training and fine-tuning LLMs will launch in September, with a program focused on deep learning across business verticals slated to start in October or November.


Got an idea for a topic that Tech Works should explore? Send a message to @TheNewStack or @JKRiggins.

The post Tech Works: How to Fill the 27 Million AI Engineer Gap appeared first on The New Stack.

]]>
How Google Unlocks and Measures Developer Productivity https://thenewstack.io/how-google-unlocks-and-measures-developer-productivity/ Thu, 17 Aug 2023 10:00:35 +0000 https://thenewstack.io/?p=22715946

The time of rapid growth is on hold, leaving engineering teams trying to do more with less. Tech giant Google

The post How Google Unlocks and Measures Developer Productivity appeared first on The New Stack.

]]>

The time of rapid growth is on hold, leaving engineering teams trying to do more with less. Tech giant Google isn’t immune to this after laying off 6% of its staff last January. And no matter where you are, tighter customer budgets are driving greater demand to release differentiating features faster.

Unlocking productivity for one of software development’s biggest expenses — the humans making it — is more important than ever.

Developer productivity research measures an engineer’s ability to produce a certain amount of work in a given time. This discipline studies not only the end result but what socio-technical factors influence it. More and more, it also attempts to measure developer experience, as it’s proven that DevEx drives productivity.

After all, software development is first and foremost creative work, meaning any effort to improve developer productivity should focus on both human-to-computer and human-to-human interaction among people, processes and technology. That’s harder than it sounds, as the human experience is rarely multiple-choice.

Developer productivity research is also a nascent topic as developer experience in general tends to be hard to measure.

In a recent episode of the Engineering Enablement podcast, host Abi Noda interviewed Ciera Jaspan and Collin Green, who together lead the engineering productivity research team at Google. At Google, engineering productivity across tens of thousands of engineers comes down to “delivering frictionless engineering and excellent products.”

In this post, we reflect on the latest research and lessons from the engineers, user experience (UX) researchers and psychologists that look to measure and enhance the developer experience and productivity at Google.

The Set-up: Who’s on the Team

Google’s engineering productivity team has about 2,000 engineers, mostly focused on making developer tools and processes more effective. Within, there’s a much smaller team that focuses on engineering productivity research — not necessarily the how, but more the why, when, what and how much.

It’s a mixed-method team that does both quantitative and qualitative research. It also is a mixed team of about half engineers and half user experience researchers, with folks who’ve previously worked as behavioral economists, social psychologists, industrial-organizational psychologists, and even someone from public health.

The social sciences background, Jaspan said, provides the necessary context. Logs analysis — a common starting point for developer productivity research — only provides part of the picture. “It tells you what developers are doing. But it doesn’t tell you why they’re doing that. It doesn’t tell you how they feel about it, [or] if what they’re doing is good or bad. It doesn’t tell you if there’s room for improvement. It only gives you a number, but you can’t interpret that number,” she said on the podcast. “Unless you have more of the qualitative side of the world, and you understand the behaviors and how those behaviors change over time, depending upon how you change the context.”

This is why the productivity research team hired their first UX researcher about five years ago to help design better surveys. Then, by pairing the UX folks with engineers, they are able to optimize not just what they were asking but the when and how. For example, this pairing enabled experience sampling, integrating surveys at the moment developers are running a build. The engineers can help provide both firsthand experience and technical solutions that scale UX research.

“The direct access to subject matter experts who are way deep in it and who are at the top of their field is a really powerful augmentation to have in this quiver of arrows that is behavioral research methods,” Green said. “The domain expertise, the scalability, and the technical skills from the engineering side, combined with the wide variety of behavioral research methods and a facility accounting for things like bias, and the way people work, and what to watch out for in survey responses or interviews,” from the social scientists combine for UX research in a way that may be unique to Google. The UX folks have uncovered nonresponse bias and the engineers have discovered upstream bugs because things simply didn’t look right.

Developer Productivity Is an Org-Wide Goal

This team’s first customer is the first-party developer team which builds the developer tooling for the whole org. The goal is to help them make improvements to infrastructure tooling, processes and best practices.

“When they want to, for example, understand what makes developers productive and what could make them more productive, our data [and] our research is one of the places they go to understand how to even measure that,” Green said.

The productivity research team also collaborates with other teams, including operations, real estate and workspaces, and corporate engineering — who create tools for all Googlers, not just engineers — as well as other teams that can affect the overall developer experience. And then, of course, the learnings from developer productivity could benefit other non-technical teams, so long as cross-company communication ensues.

“So when you focus on engineering productivity, you’re focusing on a big chunk of the Google population and so there’s wide interest in what we find,” Green said.

The Google engineering productivity team also acts as a conduit among different dev teams. As Jaspan said, “The company’s really big. People are doing different types of development. The people building the tools may not know about all the different types of work being done.”

All this makes for what Green calls a “playground of well-formed data” paired with engineers who have real experience with the problems at hand.

Speed, Ease and Quality Drive Productivity

So, if you had Google’s engineering budget, what would you measure?

With the rise of platform engineering and the consolidation of cross-organizational tooling, it’s become easier to track the technical developer experience. What’s still challenging is the effect of that technology on its human users and the effect of the people and processes around that experience. No single measurement could begin to capture that.

The developer productivity research team, Jaspan said, upholds a philosophy: There is no single metric that’s going to get you developer productivity. From here, she explained, the team triangulates across three intersecting axes:

  • Speed
  • Ease
  • Quality

For example, Green once proposed – tongue in cheek, to make a point – that the quickest way to improve productivity would be to remove code reviews — which of course everyone resisted because, while it’d increase speed and ease of release, it’d decrease quality. And the team’s research has proven that code quality improves developer productivity.

For speed, they do measure logs, but they also measure engineers’ perception of how fast they think they’re going, as well as diary studies and interviews. Jaspan said, “It is both using multiple measures, but also making sure that they’re validated against each other.”

Mixed-Method Research Validates Data

To have a deeper study of Google’s software development behavior, the team performed a cross-tool logs study, ingesting logs from multiple developer tools. They also performed a diary study, in which, every few minutes, engineers wrote down what they were doing. They compared the two in order to create confidence in the data logs. Since each engineer works and perceives their work differently, it can become an apples-and-oranges situation, so they apply what’s called interrater reliability to calculate the agreement between the two studies.
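
One common statistic for that kind of agreement is Cohen’s kappa, which corrects raw agreement for chance. The podcast doesn’t name the exact measure Google uses, so treat this Python sketch, with made-up labels, as illustrative only.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two raters (or data sources), corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Made-up example: the same six time slices, coded once from tool logs and once from a diary study
logs  = ["coding", "build", "coding", "review", "meeting", "coding"]
diary = ["coding", "build", "review", "review", "meeting", "coding"]
print(round(cohens_kappa(logs, diary), 2))   # -> 0.77
```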

“We assume there is some truth out there that we can’t directly observe without like sitting next to the developer and probably bothering them,” Green said. “And so we take these two sources and we say: Are these two lenses telling us about the same world?”

The data log study can be performed at scale passively, without having to bug engineers at all, while the diary studies can only be done with up to 50 engineers at a time — and they can start to become annoying.

“Once we’ve sort of found good evidence that we’re getting the same information from the two sources, then we can like lean into the scalable method,” he explained.

Technical Debt and the Engineering Satisfaction Survey

Since 2018, another powerful measuring tool at Google has been the quarterly engineering satisfaction survey, which goes out to about a third of the engineering force at a time. Green admitted that executives were reticent about this measurement at first because it’s “just people’s opinions.” During the pandemic lockdowns of 2020, the survey first revealed an uptick in productivity, followed by a big dip the next quarter, as time at home, often alone, continued.

It’s proven that technical debt negatively affects developer morale and decreases development speed, so it’s not surprising that, early on, the survey featured two questions on the impact of technical debt on productivity:

  • What are the underlying causes of technical debt that you encounter?
  • What mitigations would be appropriate to fix this technical debt?

Over the years, in response, Jaspan and Green’s team combined responses until they settled on 10 categories of technical debt that could be hindering engineering productivity:

  • Migration is needed or in progress.
  • Documentation on project and/or APIs is hard to find, missing or incomplete.
  • Poor test quality or coverage.
  • Code is poorly designed.
  • Dead and/or abandoned code has not been removed.
  • The codebase has degraded or has not kept up with changing standards.
  • A team lacks necessary expertise.
  • Dependencies are unstable, rapidly changing, or trigger rollbacks.
  • Migration was poorly executed or abandoned, maybe resulting in maintaining two versions.
  • Release process needs to be updated, migrated, or maintained.

Engineers can choose any or all options. The resulting data has uncovered differing technical debt interventions needed for different audiences like machine learning engineers versus backend engineers. They also slice the data along organizational lines to show and compare progress in conquering this debt.

The paper on this technical debt question acknowledges that survey-based measures are a lagging indicator — technical debt only registers as a real problem once it has become severe enough to hinder engineers. However, after exploring 117 metrics, the Google team has yet to find one that predicts when technical debt is about to hinder productivity.

They’ve also added four questions on how teams are managing debt, as they look for continuous improvement.

As this survey became more important to the organization as a whole, engineering VPs started requesting their own questions. That was helpful for a while but then the survey had to be streamlined back down. Now, a different UX researcher is in charge of the survey each quarter with the support of a different engineer, alongside team feedback. Green admitted the survey is still rather “hefty.”

No matter the size (and budget) of your organization, you are encouraged to invest in a mix of automated, measurable research and observational, experiential research to understand your developer experience and the productivity it supports or hinders.

Just remember that the metrics will change as your teams and your code change. As Jaspan said, “We know there’s not a single metric for developer productivity, so we try to use all these different research methods to see: are they all aligned? Are they telling us the same thing is happening? Or are they misaligned? In which case we need to dig deeper to figure out what’s going on.”


The post How Google Unlocks and Measures Developer Productivity appeared first on The New Stack.

]]>
For Games about Civics, US Library of Congress Promises Prizes https://thenewstack.io/for-games-about-civics-u-s-library-of-congress-promises-prizes/ Sun, 13 Aug 2023 13:00:22 +0000 https://thenewstack.io/?p=22715030

It’s “the largest collection of human knowledge ever assembled,” according to the official website for America’s Library of Congress —

The post For Games about Civics, US Library of Congress Promises Prizes appeared first on The New Stack.

]]>

It’s “the largest collection of human knowledge ever assembled,” according to the official website for America’s Library of Congress — the largest library in the world. Today its massive operation includes 3,172 permanent staffers (with a total budget authority of $838.9 million), overseeing more than 170 million items, and adding more than 10,000 new items each day.

Yet as one webpage puts it, “It is not enough to collect and preserve. To be successful, collections must be used…”

So America’s Library of Congress is now hosting a contest to create video games that “improve public knowledge of civics” while featuring the library’s resources. And it’ll award $35,000 in prizes. ($20,000 goes to the first-place winner, with $10,000 and $5,000 prizes for second and third place).

Robert Brammer, chief of the Library’s external relations, has been doing the work of organizing the challenge. “We’ve received a lot of positive feedback about the Challenge,” Brammer said in an email interview, “particularly from students, educators, librarians, and video game developers!”

“Think Oregon Trail, Flappy Bird, or Candy Crush,” explain the official rules, “but with educational content that teaches lessons about civics and incorporates Library of Congress resources.”

The 1971 game Oregon Trail seems to be a particular inspiration. In announcing the contest, Brammer remembered that “People wear t-shirts with its graphics, and it’s a reminder of how fun learning can be in the right context.”

In our email interview, Brammer said that “It seems that a lot of people have good memories of playing simple, engaging educational games like The Oregon Trail, and are interested in creating a similar kind of game that makes learning about civics fun…”

“We hope this challenge inspires game developers to create fun, lightweight video games in the spirit of Oregon Trail that improve public knowledge of civics and incorporate Library of Congress resources to educate and entertain today’s students.”

Interior of Library of Congress Jefferson building - via Wikipedia

Enjoying and Learning

The event’s organizers already received lots of questions about the contest, Brammer says. “And the interesting thing is that the questions originate from a wide variety of people, ranging from people who have experience developing classic 8-bit video games to students who are involved in a coding club.”

You can almost sense the enthusiasm. (“You may submit more than one entry,” according to the official rules. “However, each entry must be unique…”) Game-makers just need to submit an entry by Nov. 27 — the Monday after Thanksgiving weekend — that’s “playable in a modern web browser.” (Along with an essay describing the entry in 500 words or less.) The only other rule is that games also need to work with screen readers and be accessible to disabled individuals (with a reference guide for people using assistive technologies). Entrants under the age of 18 need signed consent from a parent or guardian.

Brammer says five finalists will first be chosen by a panel of fair and impartial judges, “drawn from a variety of service units from across the Library of Congress.

“We were fortunate to secure judges with a variety of backgrounds, such as education, technology, librarianship, and law.”

And then the Library of Congress staff selects the three winning games…

“It’s often said on both sides of the aisle that the state of civics knowledge is in crisis,” Brammer said last December (while submitting it for funding from the Friends of the Library of Congress).

Brammer believed the contest could help address the issue. “In addition to the cash prizes, we may invite the winners of the competition to Washington, D.C., to present their work,” explain the official rules. (“Subject to our own discretion and the availability of funds, we may provide some financial assistance to help with your travel expenses.”)

And Brammer’s proposal also called for winners honored in a public ceremony, followed by the hosting of their games on the Library of Congress website, “for use by the American public.”

Inspirations

In our email interview, Brammer was already anticipating the impact that could be made by the games. “We hope that people of all ages will play these games when they are placed online, particularly students,” Brammer wrote, “and that our patrons will enjoy themselves while learning more about civics.”

The FAQ clarifies that the games aren’t meant to address complex philosophical questions, and provides some simpler example topics (like the Constitution, the Bill of Rights, or the three branches of the U.S. government). But the Library of Congress is filled with centuries of cultural treasures that could serve as thought-provoking resources for a civics-themed video game.

Its holdings include part of Thomas Jefferson’s first draft of the Declaration of Independence, a photo of the crowd at the inauguration of Abraham Lincoln, and an audio collection of man-on-the-street interviews recorded in 1941, just three days after the attack on Pearl Harbor.

But that’s just the beginning. The contest’s guidelines remind participants of the depth of the library’s collections and its many, many resources.

1853 image of the Library of Congress interior (via Wikipedia)

Challenge.gov

Interestingly, the announcement appeared at Challenge.gov, “the official hub for challenges and prize competitions across the U.S. federal government.”

The site’s goal is helping federal agencies “mature and scale the use of prize competitions in order to advance their missions…” according to its About page. It accomplishes this goal “by offering advanced infrastructure…, hosting interactive learning experiences, and developing practical toolkits” — but also by “empowering members of the Challenge and Prize Community of Practice.” (Defined elsewhere as a “thriving inter-agency community of over 800 dedicated and passionate civil servants encouraging innovation in government” that “strives to tap into the public brain trust to help government solve complex problems.”)

Other contests include NASA’s Watts on the Moon, a $4.5 million contest, seeking “innovative engineering” to power future moon missions. “Since 2010, the U.S. government has run over 1,200 prize competitions,” explains the site, “engaging public solvers ranging from students and hobbyists to small business owners and academic researchers.”

But maybe this one will be especially far-reaching. In a statement at the start of its four-year strategic plan in 2019, Librarian of Congress Carla Hayden listed specific goals to “elevate digital experiences” to enhance “discoverability” and “develop content in a variety of formats and media to enhance the usability and accessibility of the Library’s collections.”

But Hayden also spoke of “expanding the Library’s reach and deepening our impact,” adding “I can’t wait to see the many thousands of sparks we ignite.

“Maybe one of them will be yours.”

The post For Games about Civics, US Library of Congress Promises Prizes appeared first on The New Stack.

]]>
Entrepreneurship for Engineers: Selling Open Source Software https://thenewstack.io/entrepreneurship-for-engineers-selling-open-source-software/ Fri, 11 Aug 2023 12:07:56 +0000 https://thenewstack.io/?p=22715290

No matter what kind of company you intend to build — open source or proprietary, DevTool or not — at

The post Entrepreneurship for Engineers: Selling Open Source Software appeared first on The New Stack.

]]>

No matter what kind of company you intend to build — open source or proprietary, DevTool or not — at least one of the company founders will have to close deals at the beginning of the company’s life. Even as the company gets larger, founders still need to be involved in sales, especially big deals.

For founders of open source startups (and the sales teams they eventually hire), what is unique about sales when there’s a free alternative your company is also promoting?

I was inspired to delve into this topic after seeing Nicholas Erdenberger, chief revenue officer at dbt Labs, talk on selling free software at HeavyBit’s DevGuild conference. But I’ve also spoken with experienced salespeople in the open source ecosystem to get their perspectives. Here’s what I’ve learned.

First, the Basics

Sales is obviously critical to any company’s success, but it’s also not the first thing you do as a company. Once you think about sales as the process of ushering a deal over the finish line, this becomes clearer.

“I’m going to talk about basics and fundamentals, that if you don’t screw up, you will be successful,” Erdenberger said at the beginning of his talk at DevGuild. “And that a lot of people do screw up, so we should probably focus on them.”

Create an open source project people love. “This sounds really obvious, but I see people do this all the time: They hire salespeople to help develop the open source project, or help fund the open source project,” Erdenberger said. “This is a really bad idea.”

Make a commitment to commercialization. “There are a lot of technical founders in here who love the open source thing that they built and have to be ready to make trade-offs between prioritizing that open source and that community that you love, and prioritizing building a software business,” Erdenberger said.

Have a framework for the open source versus the paid product. This means a rationale, not a feature list, that can be shared and understood externally and internally, with your customers, community and team. If you have a list of 20 features, it should be easy for all of those stakeholders to see which belong to open source and which belong to paid.

Build a working commercial product. “It doesn’t have to be awesome,” Erdenberger said. But it does have to do what you say it does and provide a value that customers are willing to pay for.

You have to be realistic about where your product is now, versus what kind of customers you are chasing, added Lee Wright, vice president of sales at Quix, a data platform company. If you are a seed-stage company with zero compliance certifications, talking about how to get into multinational banks is just a waste of time.

“I’m always saying that salespeople are not magicians,” said Wright. “What salespeople categorically do not do is generate demand.”

As an open source company, you have to make sure you have the basics in place to drive adoption of your project and to generate leads before you think about hiring salespeople. And even if you’re still at the founder sales stage, you need to have all of these fundamentals in place to be successful there, too.

Sales Tactics

When you’re selling for an open source business, Wright said, you have basically three levers to pull. You can create net new users, convert open source users to paid customers, and expand existing accounts.

Account expansion isn’t much different from a proprietary software sales situation, but the other two can be — especially the process of converting an open source user to paid.

“The most important thing to know as a seller is 99% of people who use your open source project are never going to pay you a penny,” Wright said.

As a seller — and as a founder — you have to be comfortable with this.

“Salespeople who come from a non-open source software background, at first they get annoyed with customers who want to do everything themselves for free,” said Reg Deraed, continental Europe field sales director at Canonical. (And he admits to feeling the same way when he first started working at Canonical.)

But now, he sees every enterprise that uses his company’s product Ubuntu as a win, even if they don’t pay.

For an open source user to convert to a paid customer, Wright said, one of three things has to happen: They are in production and there’s a major incident, the person responsible for operating the software leaves, or there are changes to their enterprise platform requirements.

“If there’s no change, you don’t have a buying trigger,” he said, and they’ll never pay.

But the trick is you want to make sure your phone number is the first one a user thinks of calling if they ever do experience a buying trigger. That means staying in touch with them and being useful, not pushy or sales-y, on a regular basis.

For net new users, the sales cycle also isn’t dramatically different from any other software sales, Wright and Deraed agreed. “You need a [proof of concept], need to have proof of a business case, etc.,” Wright said.

Deraed said he explains it to new team members who come from a proprietary software sales background as like selling a traditional software license plus support contract — except that there’s no license in this case. The difference is you might have net new users who ultimately decide that your software is awesome — but they’ll be fine with the open source project.

Founder Sales and Embracing Rejection

Deraed echoed Erdenberger’s notion that CEOs have to do sales at the beginning — and that is true of all startups, not just open source companies.

“If the CEO doesn’t know why customers buy the product, the sales team won’t either,” Deraed said.

Wright had two specific pieces of advice for founders. The first is embracing rejection.

“You will lose 99% of deals,” he said. “Founders I’ve met have really struggled with this. At every stage of the funnel, you’re going to have about a 70% drop off.”

The second is that while you, as a founder, are thinking about your own company all day, every day, your customers aren’t. You’re just one thing on their massive to-do list, and you need to have patience as a result.

The true art of sales, Wright said, has nothing to do with whether or not your product is open source: “It’s literally saying, What is it you’re trying to achieve, and by when? And yes, I can help you do that.”

The post Entrepreneurship for Engineers: Selling Open Source Software appeared first on The New Stack.

]]>
Four Ways to Win Executive Buy-In for Automation https://thenewstack.io/four-ways-to-win-executive-buy-in-for-automation/ Thu, 27 Jul 2023 14:40:05 +0000 https://thenewstack.io/?p=22714147

Automation can have a positive impact on today’s organizations. Anything from streamlining software deployments to automatically remediating incidents can save

The post Four Ways to Win Executive Buy-In for Automation appeared first on The New Stack.

]]>

Automation can have a positive impact on today’s organizations. Anything from streamlining software deployments to automatically remediating incidents can save costs, accelerate value and improve output. Demonstrating both current and future value to executives is a vital prerequisite for securing funding for such projects. But how do you do it?

Although the value of automation can vary, there are some basic, repeatable steps which should help to get projects off the ground. Gather baseline measures, determine the right metrics, and then package them to demonstrate return on investment (ROI) and business value. Here’s how.

Working Out a Baseline

Automating incident resolution and service requests can reduce labor costs and waiting time by up to 99%. But demonstrating value requires reporting these savings in the context of complex business workflows — sequences of human and machine-based activities. Begin by creating a baseline of the organization’s “as is” state, related to relevant key performance indicators (KPIs) for business processes and departmental functions. These will help to track the value generated by any automation project. Next, collect more detailed statistics on the workflows earmarked for automation.

For IT operations, KPIs should focus on meeting customer and internal service-level agreements (SLAs), responsiveness and cost. For IT service and incident resolution workflows, these could include mean time to completion, mean time to resolve, processing time and cost, incident response productivity and workflow productivity.

With these metrics captured, the organization can then calculate process productivity — that is, total benefit produced by a workflow over a period of time, like incidents closed per month. And they can help to compute process efficiency — total benefit produced by a workflow per person over a given time, like incidents closed per responder per month.
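
As a minimal sketch of those two composite metrics, using hypothetical incident numbers rather than anyone’s real data:

```python
def process_productivity(units_completed: int, months: float) -> float:
    """Total benefit produced by a workflow over a period, e.g. incidents closed per month."""
    return units_completed / months

def process_efficiency(units_completed: int, months: float, people: int) -> float:
    """Benefit per person over the same period, e.g. incidents closed per responder per month."""
    return process_productivity(units_completed, months) / people

# Hypothetical baseline: 240 incidents closed in 3 months by a team of 8 responders
print(process_productivity(240, 3))    # -> 80.0 incidents per month
print(process_efficiency(240, 3, 8))   # -> 10.0 incidents per responder per month
```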

Four Ways to Measure the Value of Automation

Now comes the really important bit: How can this be reported to the broader business? Consider the following ways to help measure and report the value of automation projects:

  1. Value Per Automation Run

This is the simplest model for calculating the business value of automation, which assumes value is generated whenever automation runs. Consider a data-transfer job that may take a single staff member a quarter of their work hours per week without automation. From that “as is” process, they might switch to a “to be” process that takes zero human time per week, thanks to full automation. You would report that labor savings per automation run as the generated business value.
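
As a worked example under assumed numbers (a 40-hour week, a quarter of it spent on the manual job, and a hypothetical loaded labor rate of $60 per hour):

```python
HOURS_PER_WEEK = 40
MANUAL_SHARE = 0.25     # a quarter of one person's week before automation
AUTOMATED_HOURS = 0.0   # fully automated afterwards
LOADED_RATE = 60.0      # hypothetical fully loaded cost per hour, in dollars

hours_saved_per_week = HOURS_PER_WEEK * MANUAL_SHARE - AUTOMATED_HOURS
value_per_week = hours_saved_per_week * LOADED_RATE

print(f"{hours_saved_per_week:.0f} hours and ${value_per_week:,.0f} saved per weekly run")
# -> 10 hours and $600 saved per weekly run
```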

  2. More Complex Workflows

It’s important to remember that automated tasks don’t exist in a vacuum. They are part of a business workflow, with inputs, outputs and even possibly some human interaction. So showing business impact on a workflow requires calculating composite metrics, many of which can be captured in the systems that start, end or track a workflow, such as your IT service management (ITSM) system or even the inbox for an email alias.

  3. When Automation Runs Frequently

Often automation runs much more frequently than humans have time to do it. In such a case, different measurements are required to truly capture the business value generated. Take that data-transfer job previously mentioned, which saves so much time it reduces weekly personnel costs by 99%. But consider this: It doesn’t just reduce staff costs, it also runs much faster than before, meaning the organization can increase its update frequency, from once a week to more than once daily. This could be compared to having more people manually running this job.

However, it would not make sense in this instance to report the outrageously good result if we ran this process every five minutes. This would amount to something outlandish such as labor savings of 500 employees per week. Instead, an alternative approach is required. In this case, it would eschew measuring each update execution in favor of measuring every instance of someone pulling the data as if a human had to gather fresh data right then. That would result in a more realistic value calculation of 100 hours of cycle savings.
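
A small sketch of the two framings side by side shows why the per-run math breaks down; all of the numbers here are hypothetical:

```python
RUNS_PER_WEEK = 7 * 24 * 12        # every five minutes -> 2,016 runs a week
HOURS_SAVED_PER_MANUAL_RUN = 10    # what one manual run used to cost in labor

# Naive per-run framing: absurdly large
naive_hours = RUNS_PER_WEEK * HOURS_SAVED_PER_MANUAL_RUN
print(f"Naive claim: {naive_hours:,} hours/week (~{naive_hours / 40:.0f} full-time staff)")

# Alternative framing: count only the moments someone actually pulled fresh data
DATA_PULLS_PER_WEEK = 10
CYCLE_HOURS_SAVED_PER_PULL = 10    # waiting avoided because the data is already fresh
print(f"Realistic claim: {DATA_PULLS_PER_WEEK * CYCLE_HOURS_SAVED_PER_PULL} hours of cycle savings/week")
```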

  4. The Value of Improved Operations

Many batch processes involve validation and verification of data. An organization could measure the labor saved from replacing human toil with automation, but this would be an inadequate measure of the true potential business value. Automating validation checks could reduce liability caused by missed deadlines, for example, in which case it would be more accurate to express delivered business value as reduction in liability. Or an automated verification process could improve the quality of operations, in which case it may be better to use metrics like reduced downtime, operational cost savings or reductions in end-user on-call rotations.

Going High Level

Detailed metrics are one thing. But some executives will be satisfied with higher-level metrics to assess the impact of process automation. In this case, it’s vital to understand the relevance and contribution of those automated workflows to the top-line KPIs of the business processes they help implement. This will be highly dependent on workflow, business process and organization.

To get there, first understand the workflow in more detail. Is it a core process that helps to drive revenue, a supporting process related to business expense or cost control, or a management process linked to governance and risk reduction? Next, understand the cycle value of the workflow to be automated or the impact of its iterations. Things like cost improvements, faster processing times, improved quality and higher productivity should be linked to the business function KPIs of the process.

Finally, it’s time to analyze ROI. For a monetary calculation, consider metrics such as the cost of automation, the reduced costs of workflow execution and error reduction, and the opportunity delivered by freeing people to work on higher-value tasks. You may also want to add qualitative improvements that are harder to translate into costs, such as improved employee morale, customer satisfaction and net promoter scores.
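
Expressed as a simple calculation with hypothetical annual figures, that monetary ROI might look like this:

```python
# Hypothetical annual figures for one automated workflow
automation_cost = 50_000          # licenses, build time, maintenance
labor_savings = 90_000            # reduced cost of executing the workflow
error_reduction_savings = 20_000  # rework and incident cost avoided
opportunity_value = 30_000        # value of higher-value work people do instead

total_benefit = labor_savings + error_reduction_savings + opportunity_value
roi = (total_benefit - automation_cost) / automation_cost

print(f"ROI: {roi:.0%}")   # -> ROI: 180%
```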

Make It Personal

To show automation ROI, identifying the right workflow metrics and tying them to KPIs and goals for the relevant business process is a great place to start. Above all, remember to customize any approach to the relevant stakeholders, their priorities and the benefits that matter most to them. After all, it takes more than great ideas to secure funding and buy-in for process automation.

The post Four Ways to Win Executive Buy-In for Automation appeared first on The New Stack.

]]>
Mindset Refactor: Evolving for Developer Success https://thenewstack.io/mindset-refactor-evolving-for-developer-success/ Tue, 25 Jul 2023 14:37:16 +0000 https://thenewstack.io/?p=22714001

During the early stages of the beta experience for our next-generation platform, we had a number of discussions with the

The post Mindset Refactor: Evolving for Developer Success appeared first on The New Stack.

]]>

During the early stages of the beta experience for our next-generation platform, we had a number of discussions with the developer community about what they needed to successfully take advantage of our modular automation capabilities. One recurring piece of feedback was that if our platform was going to evolve from its monolithic and one-size-fits-all structure, then the onboarding path for developers would need to evolve along with it.

Developers were already struggling with a lack of resources, constant iteration and limited documentation, so it was imperative that we refresh our onboarding efforts with clear documentation, hands-on templates, engaging videos and immersive tutorials that catered to all learning styles. Only then could we confidently say we’d fostered an environment to enable developer success — one that would inspire, retain and attract developers by making their working lives simpler, more pleasant and more productive.

Getting there, however, was a process. We had to completely reimagine the experience for developers by becoming the students ourselves — the “developer zero” of our own platform. This change in perspective allowed us to create an improved path to building on our new platform, and we believe that embracing that same beginner’s mindset can also do wonders for developers as they navigate our new features and tools. Let’s explore how other teams can apply this kind of thinking.

Start from a Place of Empathy

As a developer, you have the opportunity to create automations that can streamline processes and enhance productivity for others. But first, you need a deep understanding of what the non-technical users within your organization need and the challenges they’re up against. Put yourself in the shoes of a first-time user. Listen attentively, observe their frustrations and ask probing questions to get at the root of their pain points. Communication on your end then becomes key. Make them feel heard and understood by conveying how you interpreted their problems and how your solutions can significantly improve their working lives. Offer a space for continuous feedback so the experience is more collaborative.

Let Go of Your Ego

Focus on the big picture, outside of yourself, and embrace the possibility of starting over. For developers building automations on the Slack platform, that could mean starting from scratch, rewriting existing code or revising use cases that are no longer relevant. Remember that the focus isn’t on your skills or how proficient you believe you are at your job. Instead, the emphasis is on serving your audience effectively.

Pro tip from us: Frameworks like Bolt, which allow for the adaptation of existing apps to our new modular architecture, can make this experience a lot smoother.

Embrace Change

Change can be scary — or at least associated with uncertainty and discomfort. However, it’s essential to recognize that these elements can catalyze innovation and progress. For example, at Slack, we have a unique approach to addressing discomfort we like to call “hugging the elephant.” Create a space where you can openly discuss and confront the issues that make you and your team uneasy. On the other side of that uncertainty or uneasiness can come unexpected innovation.

Be Curious

One of the most fascinating aspects of adopting a beginner’s mindset is the boundless potential for exploration. Curiosity breeds a culture of iteration and experimentation. Taking more calculated risks, pushing boundaries and challenging existing assumptions can play a crucial role in collaboration and knowledge sharing. You’ll thrive when you engage in meaningful discussions, seek constant feedback and run impactful experiments.

The learning journey is far from linear, but adopting these methods can transform challenges into opportunities for growth. At Slack, doing so allowed us to successfully address onboarding pain points and enhance the developer experience on our platform, including introducing open source sample apps, educational videos, tutorials and documentation. As we continue our professional journeys, let’s remember the value of approaching challenges with the openness and curiosity of a beginner.

The post Mindset Refactor: Evolving for Developer Success appeared first on The New Stack.

]]>
VoxPop: New TNS Weekly Survey Wants to Know What You Think https://thenewstack.io/voxpop-new-tns-weekly-survey-wants-to-know-what-you-think/ Mon, 24 Jul 2023 17:52:40 +0000 https://thenewstack.io/?p=22714018

What’s on your mind? The New Stack wants to know! Starting this week, we will post a question on the

The post VoxPop: New TNS Weekly Survey Wants to Know What You Think appeared first on The New Stack.

]]>

What’s on your mind? The New Stack wants to know!

Starting this week, we will post a question on the TNS home page that we hope you will want to answer.

It will only take a second, and we promise it’ll be fun, or at least as painless as possible.

We call this survey “VoxPop,” which means “Voice of the People.”

We will ask a new question each week. For instance, this week’s question is:

Will AI replace software engineers in the near future?

You will always be able to find the question on the banner on top of the home page.

And, for this week, the possible answers are:

  • It’s no threat. Stochastic parrots can’t debug code because they don’t comprehend it in the first place.
  • It can’t be trusted. It’s like having a team member that’s on drugs and lies to you, a lot.
  • I sure hope not, I have bills to pay.
  • Maybe not “replace”, but it’s already proven to be beneficial to the profession, and this is only the beginning.
  • I for one welcome our new AI overlords.

For topic matter, we’ll look at the news of the week in our community and devise a question and a cheeky set of answers that hopefully cover the range of possible responses.

Eventually, we’ll set up a forum where readers can discuss these topics further (we were thinking LinkedIn, but are also open to other platforms).

In the meantime, we will discuss the results in each edition of the weekly TNS newsletter. So subscribe if you want to find out what your peers are thinking.

For TNS, the survey question will help us better determine what topics our readers are interested in, which will help us better select stories down the road. Our sponsors may also use the survey from time to time to answer questions they may have about our community.

Your answers, and any other related information we gather, will be strictly confidential.

And if you have any questions you’d like us to consider for VoxPop, please drop us a line.

The post VoxPop: New TNS Weekly Survey Wants to Know What You Think appeared first on The New Stack.

]]>
A New Book about ‘The Apple II Age’ Celebrates the Users https://thenewstack.io/a-new-book-about-the-apple-ii-age-celebrates-the-users/ Sun, 23 Jul 2023 13:00:17 +0000 https://thenewstack.io/?p=22713320

“More software is available for this computer than for any other machine in the world.” That’s a quote from a

The post A New Book about ‘The Apple II Age’ Celebrates the Users appeared first on The New Stack.

]]>

“More software is available for this computer than for any other machine in the world.”

That’s a quote from a 1984 computer buyer’s guide about the pioneering Apple II, cited in a new book titled The Apple II Age: How the Computer Became Personal. In the book, author Laine Nooney explores those fruitful seven years before Apple released its first Macintosh computers in 1984, when use cases for these new-fangled “microcomputers” were still undefined and fluid.

“The Apple II Age”

Nooney makes the heartfelt case that the Apple II’s most compelling story “isn’t found in the feat of its engineering,” or in the personalities of Wozniak and Jobs, “or the way it set the stage for the company’s multibillion-dollar future.” Instead, it’s about all those brave and curious people, the users, who came “Not to hack, but to play… Not to program, but to print… The story of personal computing in the United States is not about the evolution of hackers — it’s about the rise of everyday users.”

And you can trace their activities in perfect detail through the decades-old software programs they left behind…

It’s a fresh and original approach to the history of technology. Yes, the Apple II competed with Commodore’s PET 2001 and Tandy’s TRS-80. But Nooney, an assistant professor of media industries at New York University, notes that by 1983 Apple II computers had over 2,000 software programs available — more than any other microcomputer. So this trove of programs uniquely offers “a glimpse of what users did with their personal computers, or perhaps more tellingly, what users hoped their computers might do.”

Looking back in time, Nooney calls the period “one of unusually industrious and experimental software production, as mom-and-pop development houses cast about trying to create software that could satisfy the question, ‘What is a computer even good for?'”

Software Constellations

The book argues that the era generated “a remarkable range of answers,” proving that home computing “was an object of remarkable contestation, unclear utility, futurist fantasy, conservative imagination, and frequent aggravation for its users.”

The book’s jacket promises “a constellation of software creation stories,” with each chapter revisiting an especially iconic program that also represents an entire category of software. VisiCalc‘s ground-breaking spreadsheet software represents “Business” applications, with the story of how Dan Bricklin and Bob Frankston ended up legitimizing the Apple II as a powerful workplace tool. And the “Games” category is represented by Sierra On-line’s first illustrated text adventure, Mystery House, written by Roberta Williams and programmed by her husband Ken.

In May, Vice republished an excerpt from the book. Nooney describes the “roiling debate over copy protection” in 1981 — and how software publishers threatened a boycott against a computing magazine that had published an ad for Locksmith software (which bragged that it “copies the ‘uncopyable'” — including copy-protected disks). Nooney writes that “in long-forgotten software like Locksmith, we find a history of computing precisely about how people could use their computers, and a surprisingly human one at that.”

But the book ultimately focuses more heavily on the lessons that can be learned from what programmers envisioned for these strange new devices — and how the software-buying public did (or didn’t) respond…

It’s a surprisingly challenging perspective. “We’ve been told, over and over again, in countless forms and by myriad voices, that personal computing was, from the moment of its invention, instantly recognized as a revolutionary technology and eagerly taken up by the American public,” Nooney writes. “This is not true.” Instead, chapter six cites a market report that suggests only 6 million of America’s 84 million homes in 1983 even had a personal computer. By the late 1990s, still barely one-third of US households had purchased a computer (according to figures Nooney cites from the U.S. Department of Commerce).

“This is not television, which went from 4% to 89% of U.S. households in a decade,” Nooney emphasized in an online interview last week. “Computers were a hard sell. This wasn’t the cellphone.” So instead the earliest emergence of personal computing in America was “a wondrous mangle,” Nooney writes, saying it turned into an era where “overnight entrepreneurs hastily constructed a consumer computing supply chain where one had never previously existed.”

 Apple II typical configuration 1977 (Creative Commons via Wikipedia)

But at one point Nooney even talks of “rewiring our assumptions about personal computing.” In the epilogue Nooney describes the book as “a heist, tailored to rob as many people as possible of their much-cherished faith in computing’s primordial innocence by showing how compromised, fraught, and indifferent, to all of us, this history actually is,” forcing “a reckoning with why our fantasies of history take the shape they do…”

Nooney warns there’s a larger message: that computing “has always been a story of contexts, rather than triumphs.”

During last week’s interview with Internet Archive, Nooney issued this warning about how we handle our collective past. “The minute you try to own it — you rob it of its truth.”

Mining Old Data

It’s a long-standing fascination for Nooney, who is also participating in an ongoing project with Microsoft Research’s Kevin Driscoll. They’ve teamed up to data mine letters published in an influential early 1980s magazine dedicated to software for the Apple II — Softalk. Nooney’s personal web site points out that the dozens of letters generously published each month show the community’s camaraderie — and its “diversity of authorship… Together they form a window into a tight-knit early computing community.” But Nooney’s book also cites more scholarly results in chapter 6: the finding that 1980 to 1984 saw a very clear transition from “programming to products.” (That is, a shift away from hard-core computing hobbyists…)

This shows the kind of unique research that fed into the preparation of the book itself. And in its Acknowledgements section, Nooney also specifically thanks the online Internet Archive for its larger-than-usual role. Shortly after the book-writing began in 2020, lockdowns began in response to the pandemic, and the Internet Archive’s resources were “in so many ways the reason this book exists.” Among its online offerings were archives of Softalk magazine issues from more than 40 years ago, along with other hobbyist publications from the Apple II age in the late 1970s. Nooney calls their existence “testament to the devotion and goodwill of a small, furiously dedicated community of retrocomputing enthusiasts…”

Author Laine Nooney (From the author’s web site).

The magazines were not just scanned, but also transcribed through optical character recognition for easier searching. Nooney argues that in general, the breadth and scope of the Internet Archive leads to the production of new and different kinds of scholarly works. In fact, copies of the software mentioned in the book are hosted on the site — which Nooney says was invaluable. Along with some interviews conducted over Zoom (or by phone), all the research could be completed in time for the book’s publication.

Nooney finally dedicated the book to Margot Comstock, Softalk magazine’s editor and co-founder (who died last year at age 81). The book’s index shows Softalk magazine mentioned on dozens of pages, while the dedication says Comstock’s “passion for the Apple II left behind the trace that made this book possible.”

In an article for the Verge, Nooney called Comstock “one of the most important women in Apple’s history… who was so important in the early Apple II era that according to Doom creator John Romero, her nickname was ‘The Glue.'” The article praises Comstock as one of those people performing “the other work it takes to make an industry.

“Between the folds of history is the quiet labor of building forums, cultivating relationships, bridging social gaps, and doing the writerly and technical translating that makes complicated, opaque technology accessible and exciting to newcomers.”

Maybe Nooney’s book can accomplish some of the same things with its fresh look at the early days of home computing — and the way that it spotlights new areas for exploration. For its original perspective, the book has already drawn an enthusiastic blurb from Claire L. Evans, author of Broad Band: The Untold Story of the Women Who Made the Internet.

Evans praises the book’s “rich cast of software visionaries,” while adding approvingly that it also “complicates and enriches the men-in-garages Silicon Valley mythology we all know…”

And Evans ultimately calls the book “a gift to all curious technophiles.”

Screenshot from Laine Nooney’s interview at Internet Archive

The post A New Book about ‘The Apple II Age’ Celebrates the Users appeared first on The New Stack.

]]>
Kevin Mitnick: A Hacker Hero Has Died https://thenewstack.io/kevin-mitnick-a-hacker-hero-has-died/ Thu, 20 Jul 2023 17:56:25 +0000 https://thenewstack.io/?p=22713781

Kevin Mitnick’s career began as a criminal hacker but ended too soon as the best-known white-hat hacker. I didn’t know

The post Kevin Mitnick: A Hacker Hero Has Died appeared first on The New Stack.

]]>

Kevin Mitnick’s career began as a criminal hacker but ended too soon as the best-known white-hat hacker.

I didn’t know Mitnick, the renowned hacker and cybersecurity expert, well. But I did know him well enough to know he was brilliant. I’ve been blessed to know many bright people; I’ve known few whose lives were a testament to the transformative power of second chances and the potential for redemption. His journey from notorious hacker to respected cybersecurity consultant was as remarkable as it was inspiring.

Born on August 6, 1963, in Van Nuys, California, Mitnick developed a fascination with technology at an early age. He was just 12 when he executed his first hack, manipulating the Los Angeles bus punch card system to ride for free. His exploits escalated from there, culminating in a series of high-profile hacks that made him a wanted man.

Mitnick’s hacking activities in the 1980s and 1990s were legendary. He breached the systems of some of the biggest corporations, including Digital Equipment Corporation (DEC), IBM, Motorola, and Nokia. As a teenager, he became famous for reportedly infiltrating the North American Aerospace Defense Command (NORAD), an episode that would foreshadow the movie War Games. These exploits earned him a place on the FBI’s Most Wanted list, leading to his arrest in 1995.

After serving five years in prison, Mitnick emerged with a new purpose. He leveraged his deep understanding of hacking to become a leading voice in cybersecurity. He founded Mitnick Security Consulting, where he used his unique insights to help businesses protect themselves from the threats he once posed. In 2011, he became the Chief Hacking Officer and part owner of the security awareness training company KnowBe4. He also spent much of his time working with his Global Ghost Team, an elite pen-testing team.

Mitnick was also a prolific author, penning several books on cybersecurity, including “The Art of Invisibility” and “Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker.” These works have become essential reading for anyone interested in understanding the mind of a hacker and the vulnerabilities of our interconnected world.

Mitnick was widely respected in the tech community despite, or perhaps because of, his past. His transformation from a symbol of cyber menace to a beacon of cybersecurity knowledge was a testament to his resilience and adaptability. He was a frequent speaker at tech conferences, where he shared his experiences and insights, always with a sense of humor and a twinkle in his eye.

His passing is a significant loss to the cybersecurity community. His contributions to the field, unique insights, and relentless advocacy for better security practices have left an indelible mark. Kevin Mitnick’s legacy will continue to influence and inspire future generations of cybersecurity professionals.

Mitnick, who died of pancreatic cancer on Sunday, July 16, 2023, leaves behind his beloved wife, Kimberley Mitnick, and their unborn first child. His family asks for donations to be made in his memory to The National Pancreas Foundation or The Equal Justice Initiative.

The post Kevin Mitnick: A Hacker Hero Has Died appeared first on The New Stack.

]]>
Westrum‘s Organizational Cultures Are Vital but Misunderstood https://thenewstack.io/westrums-organizational-cultures-are-vital-but-misunderstood/ Tue, 18 Jul 2023 14:34:05 +0000 https://thenewstack.io/?p=22713405

In 1988, sociologist Ron Westrum created a typology of organizational culture and published his findings in a 2004 health industry

The post Westrum‘s Organizational Cultures Are Vital but Misunderstood appeared first on The New Stack.

]]>

Sociologist Ron Westrum created a typology of organizational culture in 1988 and published his findings in a 2004 health industry paper. Westrum’s classification system has become one of the best ways to understand and predict an organization’s performance. The research into generative organizational culture has become part of the DevOps Research and Assessment (DORA) Core collection, a set of capabilities, metrics and outcomes proven to endure through multiple years of study.

In this article, you’ll find out why Westrum’s typology is a reliable way to assess culture. You’ll then discover why the names given to different cultures can cause confusion and how to avoid misleading interpretations based on these labels.

Culture Is How You Accept and Process Information

To create a classification system for culture, you need to decide which demonstrable outcomes to measure. People often think of employee satisfaction or staff retention as suitable cultural measures.

With a clear idea of outcomes, you can create a hypothesis and design an experiment. You might look at the benefits package, office facilities, availability of flexible working or the presence of a well-stocked refrigerator. You could establish the relationship between these variables and employee satisfaction by surveying employees from many different organizations.

It seems common sense that well-paid employees with cold drinks on tap would be more satisfied than those on lower pay with no complimentary drinks. But salary and free drinks are poor predictors of a healthy culture. You can have a high salary, free drinks and a ping-pong table and still be rushing for the exit.

Westrum took a different approach. He wanted to understand how culture affected safety, not employee satisfaction or retention. To measure this, he was interested in the characteristics of information flow. Would better behavior around information result in a safer organization?

The classification system was interested in things like:

  • How much cooperation is there between departments?
  • What happens to people who raise problems?
  • Who takes responsibility when things go wrong?
  • Can different departments speak to each other without going through managers?
  • What happens to new ideas?

Westrum devised a robust way to test the cultural climate by focusing on information flow. The properties of information flow predict how an organization responds to safety-critical situations, but they also provide a strong indication of how it handles everyday events.

The three cultures model is based on the premise that information flow is the most critical issue for organizational safety.

  • Pathological (power-oriented): low cooperation; messengers “shot”; responsibilities shirked; bridging discouraged; failure leads to scapegoating; novelty crushed.
  • Bureaucratic (rule-oriented): modest cooperation; messengers neglected; narrow responsibilities; bridging tolerated; failure leads to justice; novelty leads to problems.
  • Generative (performance-oriented): high cooperation; messengers trained; risks are shared; bridging encouraged; failure leads to inquiry; novelty implemented.

A generative culture is high trust and low blame. The Accelerate State of DevOps Report has found that generative culture predicts better software delivery performance and increased job satisfaction. Crucially, it also predicts better goal attainment at the organizational level. Simply put, a healthy culture is a profitable culture.

Are Cultures Mislabeled?

The problem with Westrum’s typology is that the labels cause confusion. The picture we have in our heads for terms like “pathological” and “bureaucratic” will likely differ from the specific properties described in the typology.

A bureaucratic organization reminds me of my years working in the finance industry. We all wore suits and ties and did things by the book. We were disciplined and careful because we handled critical life savings and were highly regulated. This kind of bureaucracy felt appropriate to the task at hand.

The suits and ties, discipline and attention to detail are not properties of Westrum’s bureaucratic culture. Modest cooperation, neglected messengers and problematic response to novelty are very different from the professional discipline I imagine from my time in finance.

Equally, people think a generative culture is loose and easy, which isn’t the case. A generative culture can act with high discipline. It can operate in regulated or safety-critical environments that require disciplined execution. Some employees in a generative culture might even wear pinstripes.

Culture isn’t the presence or absence of processes, rules and controls. It’s the quality and flow of information and the response to failures in the system. Imagine an airline pilot reporting a near miss. A pathological culture would silence the pilot to avoid bad publicity, a bureaucracy would ignore them and a generative culture would explore how to stop near misses from becoming terrible accidents.

We all have an idea of what it means to be pathological, bureaucratic or generative, and these ideas are often a poor match for Westrum’s definitions. This isn’t a failure on Professor Westrum’s part. The labels are appropriate in many ways. But the subjectivity of the terms makes them open to misinterpretation.

As a thought exercise, you could re-label the cultures to clarify that a pathological culture is aggressive, a bureaucratic culture is rigidly fixed and a generative culture encourages growth and learning.

  • Aggressive (power-oriented, formerly “pathological”): low cooperation; messengers “shot”; responsibilities shirked; bridging discouraged; failure leads to scapegoating; novelty crushed.
  • Fixed (rule-oriented, formerly “bureaucratic”): modest cooperation; messengers neglected; narrow responsibilities; bridging tolerated; failure leads to justice; novelty leads to problems.
  • Growth (performance-oriented, formerly “generative”): high cooperation; messengers trained; risks are shared; bridging encouraged; failure leads to inquiry; novelty implemented.

Building a Healthy Organization by Embracing a Generative/Growth Model

An organization shouldn’t convene a meeting to choose its culture based on its industry or circumstances. Generative cultures are the healthy way to run an organization; the other types represent anti-patterns.

Whether you are in a safety-critical organization or simply one that wants to achieve its goals, Westrum’s research found a generative culture performs best.

The post Westrum‘s Organizational Cultures Are Vital but Misunderstood appeared first on The New Stack.

]]>
Poll: One-Third of Mastodon Users Won’t Follow Threads Users https://thenewstack.io/poll-one-third-of-mastodon-users-wont-follow-threads-users/ Tue, 18 Jul 2023 13:46:58 +0000 https://thenewstack.io/?p=22713462

Over the weekend, I ran a poll on Mastodon that asked the following question: “If Threads goes ahead with its

The post Poll: One-Third of Mastodon Users Won’t Follow Threads Users appeared first on The New Stack.

]]>

Over the weekend, I ran a poll on Mastodon that asked the following question: “If Threads goes ahead with its plan to add ActivityPub, will you follow one or more Threads users in your Mastodon account?” The poll received 3,889 replies before closing (helped by a “boost” from Mastodon creator Eugen Rochko), so it was a statistically meaningful response. Surprisingly, one-third of respondents voted “hell no!”.

The majority of people (57%) voted “heck yeah!” to following at least one Threads user, so there is certainly hope yet for Meta’s foray into the fediverse. The remaining 10% voted “other” — and judging by the comments, usually this meant they either hadn’t decided or it would depend on how the federation between Mastodon and Threads will be implemented.

Regardless of how many of the 10% will decide to interact with Threads users, it’s interesting that a full one-third of the Mastodon users polled do not plan to follow anyone on Threads. It shows there is a lot of discontent in the wider Mastodon community about Meta’s plan to join the fediverse.

First, let’s clarify the different ways an instance (a Mastodon server) can choose to deal with Threads. The two obvious ones are to either federate with Threads (so it will be part of the instance’s extended network) or not federate (which blocks all Threads users via a “domain block”). But there are other, more subtle, options — such as what used to be called “silencing.” It’s now called “limiting” and, in the case of Threads, it would mean Threads isn’t featured in the federated feed but users can decide for themselves if they want to follow individual Threads users. Here’s how the Mastodon documentation describes this:

“A limited account is hidden to all other users on that instance, except for its followers. All of the content is still there, and it can still be found via search, mentions, and following, but the content is invisible publicly.”

Why Did One-Third of Mastodon Users Vote No?

Okay, let’s get back to individual user preferences across the Mastodon network. In particular, why would 33% of Mastodon users not want to follow at least one Threads user?

For many, the likelihood that the user base of Threads and/or Instagram includes hateful people or groups is reason enough not to federate. The word “nazi” was used multiple times to convey this sentiment.

Another prominent reason given was that Meta will unduly influence the fediverse, if/when it reaches a position of power. This comment by Erik Uden, who is the administrator of an instance called Mastodon.de, is representative:

“I think we should learn from our mistakes, especially considering how Google Hangouts was initially praised as “look, a big corporation now uses this free protocol [XMPP], it will make our instant messaging client so famous!” when in reality it lead to Google dominating the Messenger and later cutting support for XMPP, killing the once decentralized and feature rich platform almost entirely.”

Another reason proffered is that because Threads is so large (at time of writing it is well over 100 million users, but could easily approach 1 billion in the near future), it may overwhelm the much smaller servers of Mastodon.

Finally, many people simply view Meta as an untrustworthy entity. “If there is any technical hassle involved or I feel like just by following someone I might be playing into Meta‘s hands, I would probably think it is not worth it and stop,” commented Sonja Pieps. “I am absolutely in favor of treading extremely carefully around an entity as untrustworthy as Meta.”

Who’s Actually Making the Decision About Threads?

At least one commenter noted that you don’t have to rely on your instance to block Threads — you could also block the domain “threads.net” as an individual user, regardless of what your instance decides to do. But even that stance would be slightly controversial; as another commenter put it, there are “good reasons to not want to be on, nor support, a server that federates with Facebook or similar.”
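
For anyone curious what that looks like in practice, Mastodon’s REST API includes a personal domain-block endpoint. The sketch below is a rough illustration in Python: the instance URL and access token are placeholders, and the token would need the write:blocks scope.

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder: your home instance
TOKEN = "YOUR_ACCESS_TOKEN"            # placeholder: token with the write:blocks scope

# Block the threads.net domain for this account only
resp = requests.post(
    f"{INSTANCE}/api/v1/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={"domain": "threads.net"},
)
resp.raise_for_status()
print("threads.net is now blocked for this account.")
```

Unlike an instance-wide domain block applied by an administrator, this affects only the account that makes the request.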

A number of people commented that the decision will be made for them by the administrators of their instance. For some users, this presents a dilemma:

“If they block #Threads, I won’t move. If we federate with #Threads, it really depends on how many other communities then block us. There’s essentially no UX for that, you’ll just silently lose follows/followers.”

It was also interesting to see that some Mastodon users view existing large instances with suspicion. I’m currently on the largest instance, Mastodon.social, which is run by Eugen Rochko. One person commented that “I’ll probably treat them [Threads] the same way I treat mastodon dot social users: with a healthy dose of caution and a daily prayer of defederation.”

This seems a little harsh, because Mastodon.social is the easiest way to join Mastodon — hence it has become a default for many new users. And even though I’ve been on there for years now, I think it’s well managed and so I don’t see any reason to move to another instance.

Others disagree. Wrote one commenter: “I’ve encountered two major problems with mastodon dot social: spam/bots, and in my experience the majority of its users don’t include #AltText and #CW.” (It’s worth noting that many Mastodon newbies have been turned off by this kind of attitude — that is, existing Mastodon users telling them the “right way” to use the product.)

What Are the Reasons to Follow Threads Users on Mastodon?

Despite the hard-line stance some Mastodon users are taking with Threads, a lot of users have taken a more pragmatic stance (indeed, it’s fair to say that the majority of users are like this, if you believe the poll results). A Mastodon user from New Zealand noted:

“I’d love to stay in touch with friends who are on Instagram without actually having to use Instagram myself. It would, in effect, allow *me* to consolidate my socmed [social media] accounts to just Fediverse accounts. Win!”

If you assume that person meant Threads as well as Instagram, it’s a great point — many of Threads’ users have not wanted to sign up to Mastodon so far, and so perhaps Threads is a good middle ground for Mastodon users to meet them on.

Others noted that the lack of an algorithmic timeline on Mastodon is a plus and would allow them to follow Threads users unfiltered. “There will surely be some interesting people there, and the ability to consume their content without a ‘managed’ timeline would be really welcome,” wrote Andy Davidson.

There’s also the fact that some communities just aren’t well represented on Mastodon, and so this will enable Mastodon users to keep track of them. “So many of the communities I followed on Twitter outside of Tech aren’t present or are too small here,” said Jared Gaut. “Already found that some of those are better represented on Threads after a week.”

Conclusion

It’s hard not to see this poll result as an augury of trouble for the fediverse. If one-third of Mastodon users are resistant to federating with Meta, then what does that mean for the principle of decentralization on the web? Is it OK to pick and choose who can join the fediverse community — or worse, for administrators of instances to pick and choose, and not the users themselves?

We shall see what happens once Threads turns on its ActivityPub support (as yet, that hasn’t happened). As a final note, despite the massive user base that Threads already has, nobody in Europe has been allowed to sign up so far — a consequence of Europe’s privacy regulations. That raises the question: which will Threads connect to first, Mastodon or Europe?

The post Poll: One-Third of Mastodon Users Won’t Follow Threads Users appeared first on The New Stack.

]]>
An Argument Against Sovereign AI, but for Sector-Based AI https://thenewstack.io/an-argument-against-sovereign-ai-but-for-sector-based-ai/ Sat, 15 Jul 2023 15:00:54 +0000 https://thenewstack.io/?p=22713275

When Britain’s Prime Minister declared that the UK should develop a Sovereign AI, it was clearly intended as a soundbite,

The post An Argument Against Sovereign AI, but for Sector-Based AI appeared first on The New Stack.

]]>

When Britain’s Prime Minister declared that the UK should develop a Sovereign AI, it was clearly intended as a soundbite, leveraging the excitement caused by GPT and Large Language Models (LLMs). The term itself refers to the concept of artificial intelligence (AI) systems that are under the control and governance of a specific nation or state. But because of growing fears about rogue AI, and doubts over the UK obtaining enough chips to pursue exascale supercomputing, the story did not grip the industry. Yet the idea does have mileage, and other nations (notably India and Taiwan) have similar ambitions.

Also, the Sovereign AI approach could actually work in certain sectors of a nation, which I will explain in this article.

Identity, Security, Regulatory

Now, while recognizing the purely political aspects of governments pumping money into national computing champions, there are still valid reasons for valuing the strategic approaches implied by Sovereign AI, which is what this post looks at. The intention is to keep LLMs cognisant of national identity, national security, and national regulatory systems — as far as that makes sense. Some of this stems from a belief that the dominance of Google over the years has helped the US government. It is certainly true that the concerns of the White House get big tech’s attention first.

There is no question that the choice of ingested documents is reflected in LLM responses — OpenAI has run into trouble with taste issues in various territories. Thus, controlling the LLM learning process should reduce the likelihood of anomalous narratives. Similarly, how information is retained and used has different legal implications in different places; and ensuring it works within a nation’s regulations is clearly best done within that regulatory space. Security is a measure of control over the physical process of using the neural networks, storing the LLMs and disseminating responses. It also makes a virtue of the fact that the system is closed.

If, as a Brit, I ask ChatGPT a question like “Explain what ‘the House’ means in politics,” it shows both well-known examples of a bicameral chamber, but it could be argued that I only want the British example. Given that I log in to use these services, it is quite possible that OpenAI could or does alter the answers depending on my locale. But most likely it is just using existing documents on the web to create an informative response.

If this answer were given to a Brazilian, they might be understandably miffed. It must be the case that if an LLM trains only on national documents, then a purely national answer will be forthcoming.
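
There is also a cheaper way to nudge answers toward a national context without retraining anything: steer the model at query time. The sketch below is purely illustrative; it assumes the openai Python library, uses a placeholder API key and an arbitrary model choice, and is not a claim about how OpenAI actually localizes ChatGPT.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Steer the answer toward one national context with a system message
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "The user is in the United Kingdom; prefer British examples."},
        {"role": "user",
         "content": "Explain what 'the House' means in politics."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Whether that counts as sovereignty is another question; it is steering at the prompt, not control of the training corpus.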

But wait a minute. Do we want it to learn to talk in a purely bureaucratic political language? And are we saying that a paper about the House of Commons written by a French academic must have the wrong values? Real understanding is based on a mix of narrow and broad subject analysis. Some of that should be from the outside, so to speak.

These are genuine misunderstandings of language structure versus message content. If training documents are just policy data, the LLM will understand language structure from a good source, but will only “understand” how to construct a good-looking answer.

Given the success that Google has with localizing search responses, I don’t think there is a solid national identity reason for assuming that there will be much to gain by weaning an LLM on a diet of purely national resources. But the other reasons are better.

Sectorial Sovereignty

The Finnish AuroraAI program doesn’t look as if it is attempting to reject big tech sensibility; just trying to break internal silos and allocate resources more sensibly across service providers. This is a very traditional target for IT improvement, but an LLM that can read across specifications and legislation while independently spawning sub-queries in the right databases could well deliver very satisfactory results for Finns.

In short, we should not worry about the national identity meaning of “sovereign”, but look at what purpose a secure and curated system could have in various sectors.

There are two areas where the case for this looks quite strong.

The law is usually described by a well-defined corpus of documents that is used to generate further case law. Surely this mirrors how LLMs operate? So the predictions that AI will be used extensively throughout the legal world feel very likely to come about. Even within the cosseted ranks of this very arcane profession, this is no longer particularly controversial. Due diligence and litigation preparation are known to be just hard work for a legal mind. Instead of employing poorly paid junior lawyers to read a lot in a hurry, a ChatGPTLaw could deliver the goods. As everyone reading here should know, technology has never truly destroyed an industry; it just shifts the work higher up the value chain. AI will likely make legal provisions easier to obtain and thus increase the number of legal practitioners.

The other example is taken from a field where data should be secure, and the AI can be trained to draw conclusions while maintaining the anonymity of the response. Health data is, unfortunately, extremely valuable to those who are unlikely to use it for customer benefit. Insurance companies would love to know in advance who not to insure. But anonymising health data too early renders it useless. For example, if location information is messed with, early detection of epidemic outbreaks is no longer possible. Similarly, if you omit race and sex data, important trends can simply be lost. This leads to the idea that if the data had an AI sentry that could assess how (and with whom) to respond to queries without breaking confidentiality, that would encourage even more research.

The New Zookeepers

In conclusion, I’m not sure there is much mileage in thinking about “Sovereign AI” as something that will be approached by nations, but the same approach in certain sectors does seem inevitable.

And there is a solid likelihood of technical jobs opening up for people with dual skills. Understanding whether words with multiple meanings are embedded correctly (homonymy) needs technical and subject knowledge. It is no simple task to make an LLM unlearn.

The curators of the learning materials used by sovereign LLMs could become a new professionalized caste. Where to look for documents and when to hold back inclusion are both decisions that require real knowledge of an area. But they are more than just zookeepers to an exotic species. In the very close future, LLMs will not be judged like performing circus beasts, but by their response accuracy.

The post An Argument Against Sovereign AI, but for Sector-Based AI appeared first on The New Stack.

]]>