The Angular Renaissance: Why Frontend Devs Should Revisit It (Tue, 26 Sep 2023)
https://thenewstack.io/the-angular-renaissance-why-frontend-devs-should-revisit-it/

Frontend developer Alyssa Nicoll has a message for other frontend developers: Angular is experiencing a renaissance moment and it’s time to revisit this JavaScript framework.

Nicoll is the host of the Angular Air podcast and a senior developer advocate for Progress, where her duties include working with the Google Angular team. The TypeScript-based framework has changed significantly since 2015, when Google rewrote AngularJS to create a completely different framework, Angular 2+, and it continues to expand its features, Nicoll said. For a long time after that rewrite, the Angular team was reworking the framework’s underlying rendering engine, a project called Ivy. With that now finished, the team has unleashed upgrades and new features that it had previously put off, she said. As a result, Angular is becoming friendlier to all users.

“That’s why I relate it to the Renaissance, because not only is there just a bunch of activity and freshness and creativity coming in, but also it’s geared towards the developer experience,” Nicoll said. “A big part of the [Renaissance] movement was this humanist aspect of human growth and human potential. This movement in Angular focuses on the developer experience and the human potential to use Angular and make it more friendly to all users.”

That may be a hard sell. Numbers from a 2023 Stack Overflow survey show framework use overall is declining; Angular use in particular fell 24% in the past year, while Svelte and Deno use increased by approximately 62% and 61%, respectively. Angular controls about 18% of the framework “market,” while React still leads with nearly 41% adoption.

Adoption decline is part of what’s driving the change.

“The Angular team … is very conscious of this developer experience as we’re onboarding new developers, because if we don’t have an influx of new developers, our community will slowly wither,” Nicoll said.

There are three key ways the framework is evolving that make it worth revisiting, Nicoll said.

1. Moving off Modules

Angular is unique among JavaScript frameworks in that the smallest chunk of code is not a component, but a module. Modules are wrappers that hold things like dependencies and shared functionality or even routing, Nicoll explained.

The movement away from modules may cause a struggle among “Angularites” who are accustomed to them, but it will make it easier for other developers to understand the framework, she said.

“Even people who have been doing Angular for a long time, once they stop using modules, they start to see the potential going forward of what our framework could look like,” she said. “It makes us […] more compatible to other JavaScript developers. If they need to quickly jump on an Angular project — because there’s a lot of teams that have an Angular project, or a React project or a Vue project — it makes it easier for people to grok our framework, because the base pieces, the Legos, look the same.”
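
As a concrete illustration, here is a minimal sketch of a standalone component, the module-free style Angular has supported since version 14 (the component name and template are illustrative):

```typescript
// main.ts -- a standalone Angular component, declared without any NgModule.
import { Component } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';

@Component({
  selector: 'app-hello',
  standalone: true, // opts this component out of the NgModule system
  template: `<h1>Hello, {{ name }}!</h1>`,
})
export class HelloComponent {
  name = 'Angular';
}

// Bootstrap the app directly from the component -- no root AppModule required.
bootstrapApplication(HelloComponent);
```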

For Angular veterans, Nicoll recommended against a rip-and-replace approach for in-production applications.

“You can remove your application module, the base one that bootstraps the whole app. [But] I would not recommend it, because the community itself, I don’t believe, is supporting that yet,” she cautioned. “If you do that, a lot of your dependencies will probably start screaming at you, because they can no longer find your application because they depend on that base module to tell them who and what this app is, and ‘how do I fit into this world.’”

The tools and dependencies to support this base structure of Angular applications are still evolving, she added, and are just not “there” yet.

“I’d say absolutely go and remove the modules from your components or make new components or pipes without them, or directives, but maybe, maybe hang off a bit in production applications with removing your base module, unless you’re very, very sure of all the dependencies that you have and how they integrate with your application,” she said.

2. Adding Signals to Angular

Angular is adding signals, giving it a “built-in primitive for reactivity,” Nicoll said. Signals will allow developers to easily manage and respond to changes in their applications. It has the potential to revolutionize the way developers approach reactive programming and make it more accessible to a wider range of developers, she contended.

“React and a lot of other frameworks, they have this concept [of signals] even on the .Net side,” Nicoll said. “This is in a way Angular catching up and also kind of making it cooler.”

A signal is an object that has a value and can be observed for changes. Signals are similar to React state but, according to Google Bard, offer a few key advantages (see the sketch after this list):

  • Signals can be shared between components without having to pass them down as props.
  • Signals are only updated when they are needed, which can improve performance for large applications.
  • Signals can be used to create complex state management patterns, such as those popularized by Redux and MobX.
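
Here is a minimal sketch of that primitive, using the signals API Angular shipped as a developer preview in version 16 (the counter component is illustrative):

```typescript
import { Component, computed, effect, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  standalone: true,
  template: `<button (click)="increment()">{{ count() }} * 2 = {{ doubled() }}</button>`,
})
export class CounterComponent {
  count = signal(0);                          // writable signal with an initial value
  doubled = computed(() => this.count() * 2); // recomputed only when `count` changes

  constructor() {
    // Re-runs whenever a signal it reads changes; created in the constructor
    // because effect() needs an injection context.
    effect(() => console.log(`count is now ${this.count()}`));
  }

  increment() {
    this.count.update((c) => c + 1); // update based on the previous value
  }
}
```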

Currently, Angular offers observables for reactivity, combined with OnPush change detection. While that works, there’s a cost, she said.

“The cost of using observables and using on push, when you come down to it, is zone.js — and a lot of Angular developers at this point will kind of make a throwing up face when you mention zone.js, or maybe like cross themselves — and change detection,” she said. “A built-in reactive primitive like signals does not have that cost.”

Change detection as it exists today probably won’t be part of Angular’s future, she added, which will translate into faster load times, a quicker application, and potentially even quicker development.

3. Control Flow

The proposed new control flow syntax is “heavily inspired by Svelte’s control flow, as well as the Mustache templating language,” Nicoll explained in a presentation on the topic she shared with The New Stack. “Think inline ifs, elses, switch statements, and defers.”

Control flow allows if and else statements to be put inside a template, so that developers can load things conditionally, or even defer the loading of things (like images) until the user needs them or scrolls toward them, she explained.
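
A sketch of what that looks like under the proposed syntax as it was described at the time of writing (since the syntax is still proposed, details may change; the component and image path are illustrative):

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-gallery',
  standalone: true,
  template: `
    @if (photos.length > 0) {
      @for (photo of photos; track photo) {
        <img [src]="photo" alt="Gallery photo" />
      }
    } @else {
      <p>No photos yet.</p>
    }

    <!-- Defer loading the hero image until it scrolls into the viewport. -->
    @defer (on viewport) {
      <img [src]="heroImage" alt="Hero" />
    } @placeholder {
      <p>Scroll down to load the hero image.</p>
    }
  `,
})
export class GalleryComponent {
  photos: string[] = [];
  heroImage = '/assets/hero.jpg'; // illustrative path
}
```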

“All of these things to make a better […] user experience on the other end of an Angular application,” she said. “And all of it, every single thing that I’ve mentioned, is opt-in; they’re not forcing you to change anything about the way that you do Angular, and nothing will break. And I think that’s a very important promise they intend to continue keeping for the Angular community.”

Dev News: Svelte 5, AI Bot for Android Studio, and GitHub Tools (Sat, 23 Sep 2023)
https://thenewstack.io/dev-news-svelte-5-ai-bot-for-android-studio-and-github-tools/

Rich Harris offered a preview of Svelte 5 in a recent blog post and video. What’s new? Harris introduced a new way to handle reactivity in Svelte called Runes.

Reactivity is a programming concept in which data updates based on its dependencies, as software engineer Tom Smykowski demonstrated in this blog post.

Some developers on Twitter have compared it to React’s hooks. Smykowski observed that each framework handles reactivity a little bit differently and compared Runes to Angular’s Signals and React’s use of an “explicit list of dependencies to handle fine-grained reactive updates.”
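
Based on the preview announcement, here is a minimal sketch of what runes look like inside a Svelte component’s script block (the syntax may change before release):

```javascript
let count = $state(0);              // reactive state, declared with a rune
let doubled = $derived(count * 2);  // recomputed whenever `count` changes

function increment() {
  count += 1; // a plain assignment; the compiler wires up the updates
}
```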

A release date for Svelte 5 has not been set, Harris added.

Google Releases Studio Bot for Android Studio

Google released its AI-powered coding assistant, Studio Bot, in the Android Studio canary build and made it available in more than 170 countries. Studio Bot understands natural language, though so far it’s designed to be used only in English.

“You can enter your questions in Studio Bot’s chat window ranging from very simple and open-ended ones to specific problems that you need help with,” the press release explained.

It remembers the context so that you can ask follow-up questions, e.g., “Can you give me the code for this in Kotlin” or “Can you show me how to do it in Compose.” Developers don’t need to send in source code to use Studio Bot.

“By default, Studio Bot’s responses are purely based on conversation history, and you control whether you want to share additional context or code for customized responses,” Google stated.

That said, Studio Bot is still a work in progress, so Google recommends validating its response before using it in a production app.

GitHub Launches Innovation Graph, Adds Atlassian Migration Support

GitHub on Thursday launched its GitHub Innovation Graph, an open data and insights platform on the global and local impact of developers.

The Innovation Graph includes longitudinal metrics on software development for economies around the world. The website and repository provide quarterly data dating back to 2020 on git pushes, developers, organizations, repositories, languages, licenses, topics, and economy collaborators. The platform offers a number of data visualizations, and the repository outlines the methodology. Data for each metric is available to download.

“In research commissioned by GitHub, consultancy Tattle found that researchers in the international development, public policy, and economics fields were interested in using GitHub data but faced many barriers in obtaining and using that data,” the company said in a news release. “We intend for the Innovation Graph to lower those barriers. Researchers in other fields will also benefit from convenient, aggregated data that may have previously required third-party data providers if it was available at all.”

Graph created by GitHub Innovation Graph

GitHub also announced this week that it’s adding migration support to two tools: GitHub Enterprise Importer now supports customers using Bitbucket Server and Bitbucket Data Center, and GitHub Actions Importer can now help developers pivot off Atlassian’s CI/CD products.

GitHub Actions Importer eliminates the manual process of CI migrations, automating the evaluation and testing of the CI migration of nearly a quarter million pipelines, the company said in a statement. It allows developers to move from any of Atlassian’s CI/CD products — Bitbucket Pipelines, Bamboo Server, and Bamboo Data Center — to GitHub Actions. After Feb. 15, 2024, Atlassian will no longer offer technical support, security updates or vulnerability fixes for its Server products like Bitbucket Server and Bamboo Server, according to GitHub.

DockerCon 2023 Runs Oct. 3-5

DockerCon is back with both live and virtual options this year, Oct. 3-5. The live conference is at the MagicBox in Los Angeles and runs Wednesday and Thursday. Tuesday is a workshop day, available as a separate add-on. The virtual ticket includes the live keynotes and select educational sessions.

Topics to be covered during the conference include:

  • Web Application
  • Web Development
  • Building and Deploying Applications
  • Secure Software Delivery
  • Innovation and Agility
  • Open Source
  • Emerging Trends

Vercel Launches Serverless Storage System

On Monday, frontend cloud development platform Vercel launched a public beta of Vercel Blob, its serverless storage system.

Blob stands for binary large object; blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob as well. Vercel Blob allows Vercel Pro users to store and retrieve any file with an intuitive, promise-based API.

Vercel Blob is designed for JavaScript and TypeScript frameworks. During its four-month private beta, Vercel created 50,000 blob stores. Users with a Vercel account can have multiple blob stores in a project, and each blob store can be accessed by multiple Vercel projects. Vercel Blob URLs are publicly accessible, created with an unguessable random ID, and immutable.

There are plans to support making a blob private in an upcoming release.
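
A minimal sketch of that promise-based API (assuming a project with Blob enabled and its read-write token in the environment; the path and contents are illustrative):

```typescript
import { put } from '@vercel/blob';

const blob = await put('articles/hello.txt', 'Hello from Vercel Blob!', {
  access: 'public', // uploads are public, served at an unguessable random URL
});

console.log(blob.url); // the immutable, publicly accessible URL
```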

Free Software Development Course, Coding Labs

LinkedIn Learning is collaborating with CoderPad to offer 33 new software development courses and interactive coding exercises for free through Dec. 18. Coders can learn about six languages: Python, JavaScript, Go, SQL, Java and C++. There are six new programming essentials courses, which cover the basics of each language; 18 new coding labs, or practice environments, to hone programming skills in these languages; and nine new advanced courses focused primarily on advanced techniques in the six languages, plus one course on building a generative language model from scratch.

Gradle Changes Name

Developer build tool Gradle Enterprise will now be called Develocity. The reason for this name change is that Gradle, Inc., found the original name created a misconception that Gradle Enterprise was only for the Gradle Build Tool when it actually supports both the Gradle Build Tool and the Apache Maven build system.

The company also recently announced that Develocity supports the Bazel build system, which is an open source project hosted by Google. The company also released beta-level support for sbt, the open source build system popular with the Scala language developer community. The roadmap for Develocity includes plans to support additional software development ecosystems.

Developers: Is Your API Designed for Attackers? (Wed, 20 Sep 2023)
https://thenewstack.io/developers-is-your-api-designed-for-attackers/

When an organization has a security problem with an API, it’s usually one it built internally, according to Jeremy Snyder, founder and CEO of API security firm FireTail.io.

The security firm analyzed 40 public breaches to see what role APIs played in security problems, an analysis Snyder featured in his 2023 Black Hat conference presentation. The issue might be built-in vulnerabilities, misconfigurations in the API, or even a logical flaw in the application itself — and that means it falls on developers to fix it, Snyder said.

“It’s a range of things, but it is generally with their own APIs,” Snyder told The New Stack. ”It is in their domain of influence, and honestly, their domain of control, because it is ultimately down to them to build a secure API.”

The number of breaches analyzed is small — it was limited to publicly disclosed breaches — but Snyder said the problem is potentially much more pervasive.

“First of all, 83% of all internet requests, if not more, are API requests,” he said. “It’s not the total volume of traffic. It’s the number of requests that are flowing across the Internet day to day, more than four-fifths of all requests are actually API requests, not user-initiated queries.”

In the last couple of months, he said, security researchers who work on this space have uncovered billions of records that could have been breached through poor API design. He pointed to the API design flaws in basically every full-service carrier’s frequent flyer program, which could have exposed entire datasets or allowed for the awarding of unlimited miles and hotel points.

“We’ve seen a few very, very high-profile examples,” he said. “Effectively the entire connected car ecosystem has had API design flaws that could have exposed not only the owners of all of these vehicles, [but that] allows you to update the owner records, allows you to unlock and start these vehicles and drive them away.”

Snyder explained some of the top API problems and outlined best practices developers can use to improve APIs.

Common API Flaws

Insecure direct object reference, or IDOR, is a common problem, Snyder said. It allows someone with a legitimate user’s access to manipulate an API request to access another user’s data.

“That is a super common — that may be, on its own, the single number one problem that we see consistently across the data set,” he said.

Another common problem is excessive data exposure, in which the API returns too much data. For instance, a page might have a photo, your name, an address, whatever, and the API sends everything — including personal data. Then the developer relies on either the mobile app or the web browser to hide all the data that wasn’t requested.

“Of course, bad actors don’t play by those rules,” he said. “They’re not going to go through your app or go through your web interface to try to scrape data from your API.”
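
To make the failure mode concrete, here is a hedged sketch in the Express style (the route, fields and data layer are illustrative, not from Snyder’s examples): return an explicit allowlist of fields rather than the raw record.

```typescript
import express from 'express';

const app = express();
declare const db: { users: { findById(id: string): Promise<any> } }; // hypothetical data layer

app.get('/api/users/:id', async (req, res) => {
  const user = await db.users.findById(req.params.id);
  if (!user) return res.sendStatus(404);

  // Bad: res.json(user) would also ship the email, address, password hash...
  // Good: send only the fields the page actually needs.
  res.json({ id: user.id, name: user.name, avatarUrl: user.avatarUrl });
});
```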

Developers aren’t doing this on purpose, but mistakes happen when other pressures mount, he added.

“I don’t think any developer sets out to intentionally return too much data or to intentionally build a bad API,” he said. “But I think there’s a trade-off between how quickly I can build something — speed and convenience versus the security and privacy considerations.”

Best Practices to Fix API Flaws

Write a specification. Very few developers start from absolute zero when they’re building an API, Snyder noted. Typically, they’ll use a common open source framework for building that API. Part of that initial work should include a specification file governing how the API should work, he said.

Use common tools. Don’t try to create your own kind of identity and authentication mechanisms, Snyder said. “There’s everything from WebAuthn to single sign-on mechanisms and the less you make yourself build and design around identity, the higher the chances that you could get it right easily by leveraging a proven solution,” he said.

Think about the data. “Think about designing your API in a way that doesn’t expose too much and also is like checking an authorization for each data request,” Snyder suggested. Sometimes, developers push that authorization check to the frontend on a mobile client or Internet of Things device. In one famous case, the authorization was happening inside the logic of a Peloton exercise bike. “Again, you know, hackers don’t play by those rules, so they went straight to the Peloton API using the scripting language,” he said. “They just started manipulating authorization requests and they were able to extract about 3 million records.”
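
Here is a sketch of that server-side, per-request authorization check (Express-style; requireAuth and the workout model are illustrative, loosely echoing the Peloton example rather than reproducing it):

```typescript
import express from 'express';

const app = express();
declare const requireAuth: express.RequestHandler; // hypothetical middleware that sets req.user
declare const db: {
  workouts: { findById(id: string): Promise<{ ownerId: string } | null> };
};

app.get('/api/workouts/:id', requireAuth, async (req, res) => {
  const workout = await db.workouts.findById(req.params.id);
  if (!workout) return res.sendStatus(404);

  // Never trust an ID from the client alone: confirm ownership on the
  // server, not in the mobile app or device logic.
  if (workout.ownerId !== (req as any).user.id) return res.sendStatus(403);

  res.json(workout);
});
```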

How Frontend Devs Can Take Technical Debt out of Code (Tue, 19 Sep 2023)
https://thenewstack.io/how-frontend-devs-can-take-technical-debt-out-of-code/

Technical debt can take a variety of forms. It can look like bugs left in code, or coding practices that vary by developer within the same department.

Technical debt is anything that can cause additional work or rework because it wasn’t done right the first time. Sometimes developers write code that functions extremely well on one machine, but when it’s deployed into a distributed environment, it fails — that’s also part of technical debt, said Shashank Purighalla, a former web developer who is now founder and CEO of BOS Framework, a cloud infrastructure and DevOps automation platform.

“At a very high level, the business level, you can talk about intentional technical debt, which almost every programmer [and] every dev team takes on because of time constraints, budgetary constraints,” Purighalla said. “There’s also a lot of unintentional or unintended technical debt, which people just do not know that they’re taking on — because of lack of knowledge, because of limitations in terms of their awareness of the overall ecosystem, because of some sort of a siloed view.”

Frontend and web application developers can help resolve technical debt, Purighalla told The New Stack. But first, they have to know what technical debt looks like.

Understanding Technical Debt

Developers can identify technical debt in a variety of ways, starting with that most annoying of technical debts: fixing bugs in the code. But there are other indicators, he said.

“A senior developer, in many cases, is capable of looking at code and saying, ‘I see certain constructs that have been poorly done, or there are certain implementations that may be suboptimal,’” Purighalla said. “Any[thing] from identifying bugs in the system, to unfinished code, to poor implementations, and — rising up a little bit to the ecosystem analysis — security constructs that are missing or certain protocols that are not done properly.”

Evidence of software technical debt can be seen in the rise of cyberattacks over the past three years, he said.

“This is a consequence of technical debt, and I call it unintentional technical debt in many cases, because the technical team that’s working [on it] or has introduced it or has taken over that program is not even aware that there are all of those problems,” he said.

Think Full Stack, Act Frontend

To combat technical debt, developers — even frontend developers — must see their work as a part of a greater whole, rather than in isolation, Purighalla advised.

“It is important for developers to think about what they are programming as a part of a larger system, rather than just that particular part,” he said. “There’s an engineering principle, ‘Excessive focus on perfection of art compromises the integrity of the whole.’”

Shashank Purighalla, founder and CEO of BOS Framework

That means developers have to think like full-stack developers, even if they’re not actually full-stack developers. For the frontend, that specifically means understanding the data that underlies your site or web application, Purighalla explained.

“The system starts with obviously the frontend, which end users touch and feel, and interface with the application through, and then that talks to maybe an orchestration layer of some sort, of APIs, which then talks to a backend infrastructure, which then talks to maybe a database,” he said. “That orchestration and the frontend has to be done very, very carefully.”

Frontend developers should take responsibility for the data their applications rely on, he said. For instance, frontend developers should be aware that there are roughly five types of data that developers ultimately present or capture from the interface:

  • Confidential data;
  • Highly confidential data;
  • Restricted data;
  • Internal data; or
  • Public data.

These five types of data have different requirements, based on how each is captured and then put back into the database or, conversely, how it’s fetched from a database and presented in the interface, he said.

“The types of interfaces are also very important when we talk of frontend web applications,” he said. “Today, especially in the AI world, you’re not just talking of data being slapped on a screen or painted on a screen. You’re talking of a highly interactive system that could be natural language processing driven. So based on that, how it is being captured is very, very important.”

For instance, frontend developers need to know when to use encryption, a CAPTCHA, or a registration form.

“It’s important to understand that there’s also a lot of liability today that goes into development,” he added. “What developers do not understand directly is how their decisions could impact the organization and their leadership in many, many cases.”

Standards for All Developers

To start reducing technical debt, development teams should adopt coding standards with which every developer complies, he added.

“At a very basic minimum, you’re thinking about nomenclature,” Purighalla said. “How are variables being declared and how are they being named? Public variables, global variables, private variables.”

Development teams also should adopt test-driven development, he advised. In test-driven development, unit test cases are created before developing the actual code.

“At a very minimum, test-driven development is a very good strategy to reduce the number of obvious bugs in the functionality and the user usability itself,” he said. “So requirements are viewed not as a checkbox that has to be checked off, but as a part of an outcome that has to be achieved.”

Test-driven development creates a mind shift that supports thinking about code from a functional code integrity or code completeness standpoint, he added.
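
As a minimal illustration of that test-first flow (Jest-style; the function and file names are hypothetical):

```typescript
// format-price.test.ts -- written first; it fails until the code exists.
import { formatPrice } from './format-price';

test('formats cents as a dollar string', () => {
  expect(formatPrice(1999)).toBe('$19.99');
});

// format-price.ts -- written second, with just enough code to pass the test.
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
```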

Frontend developers also have to think about whether their web applications are being developed for some internal purpose or as a SaaS application for public consumption, he added. There may be compliance concerns around HIPAA, SOC 2 or other regulations. Those concerns, combined with concerns about data and security, should guide developers.

“Those determine the types of standards that have to be followed, certain basic principles in terms of code scans, code coverage, and security scans [that] must be done at a certain periodicity,” he said. “Either this is a static code analysis that’s done, or it is done in every single deploy cycle.”

Good practices must be geared toward ensuring readability, he added, and there must be proper, in-line documentation. That could be as simple as developers adding comments about who is developing the code, when it is being written, why it is being written, what requirements exist, and what its purpose is, he said. Comments should also indicate whether there is a deeper design document or sequence diagram of some sort referenced in the project.

“The absence of this is why we have the number of cyber breaches — I cannot over-emphasize that,” he said. “It’s so easy if you have the choice of tech stack sometimes, right? If you were to go with the frontend, with just an interpreted language versus a compiled language, let’s say PHP, it’s so easy to find your way through and then start hacking a system. It doesn’t take very long, even if there’s a small vulnerability. If you were to go with a basic compiled piece of tech, the chances of somebody doing that, if it’s done well, are lowered greatly.”

In addition, developers across the organization should all follow the same standards for these practices, he added.

“Developers must understand that they’re a part of a larger ecosystem, and building a piece that works in the overall picture,” he said. “Understanding everything from the business viewpoint, and then working backward for that business outcome, which could include certain security constructs I would not program for.”

Dev News: A ‘Nue’ Frontend Dev Tool; Panda and Bun Updates (Sat, 16 Sep 2023)
https://thenewstack.io/dev-news-a-nue-frontend-dev-tool-panda-and-bun-updates/

A new minimalistic frontend development toolset called Nue.js launched Wednesday. It’s an alternative to React, Vue, Next.js, Vite, Svelte and Astro, said frontend developer and Nue.js creator Tero Piirainen when introducing it on Hacker News. It’s designed for websites and reactive user interfaces, he further explained in the Nue.js FAQ. The toolset has been open sourced under the MIT license.

“Nue ecosystem is a work-in-progress and today I’m releasing the tiny, but powerful core: Nue JS,” he wrote on Hacker News. “It’s an extremely small (2.3kb minzipped) JavaScript library for building user interfaces.”

Nue comes from the German word neue, which translates to “new” in English. It allows developers with knowledge of HTML, CSS and JavaScript to build server-side components and reactive interfaces. It’s like React or Vue, but without hooks, effects, props, or other abstractions, he added.

React vs. Nue (according to Nue)

The Nue.js website boasts that it can build user interfaces with 10x less code, presumably when compared with competitors (but that wasn’t specified). It’s designed to be part of an ecosystem, with plans to include:

  • Nue CSS for cascaded styling to replace CSS-in-JS, Tailwind and SASS;
  • Nue MVC, for building single-page apps;
  • Nue UI for creating reusable components for rapid UI development;
  • Nuemark, a markdown flavor for rich and interactive content; and
  • Nuekit for building websites and web apps with less code

Piirainen, who hails from Helsinki, has more than 25 years of experience building open source projects, technology products, and startups. Previous projects Piirainen has coded include Riot.js, Flowplayer, and jQuery Tools. He is currently the sole developer on Nue.js, but is seeking contributors.

Pandas Updated

Pandas, the popular Python library, released version 2.1.0 this week. Pandas is a data analysis and manipulation library built on top of NumPy, a library for scientific computing. This update includes a number of enhancements:

  • Avoid NumPy object type for strings by default;
  • DataFrame reductions preserve extension dtypes;
  • Copy-on-Write improvements;
  • A New DataFrame.map() method and support for ExtensionArrays; and
  • New implementation of DataFrame.stack()

The pandas team also plans to make PyArrow a required dependency with pandas 3.0. Among the listed benefits are the ability to:

  • Infer strings as PyArrow-backed by default, “enabling a significant reduction of the memory footprint and huge performance improvements,” the post stated.
  • Infer more complex dtypes with PyArrow by default, such as decimal, lists, bytes, structured data and more.
  • Improve interoperability with other libraries that depend on Apache Arrow.

The group is looking for feedback on the decision.

Node.js Release 20.6.0

The Node.js project released version 20.6.0 last week, with the big change being built-in .env file support for configuring environment variables. The change also allows developers to define NODE_OPTIONS directly in the .env file, eliminating the need to include it in the package.json, the release note stated.
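
A minimal sketch of the new flag (the file contents are illustrative):

```typescript
// .env
//   PORT=3000
//   NODE_OPTIONS=--max-old-space-size=4096
//
// Start Node with the built-in flag instead of pulling in the dotenv package:
//   node --env-file=.env server.js

// server.js
console.log(process.env.PORT); // "3000", loaded with no extra dependency
```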

There’s also a new register API on node:module that specifies a file that exports module customization hooks, passes data to the hooks, and establishes communication channels with them.

“The ‘define the file with the hooks’ part was previously handled by a flag, --experimental-loader, but when the hooks moved into a dedicated thread in 20.0.0 there was a need to provide a way to communicate between the main (application) thread and the hooks thread,” the release note stated. “This can now be done by calling register from the main thread and passing data, including MessageChannel instances.”
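
A sketch of that pattern, following the shape described in the release notes (the hooks file is hypothetical; consult the node:module docs for the exact options):

```typescript
import { register } from 'node:module';
import { MessageChannel } from 'node:worker_threads';

// Create a channel so the main thread and the hooks thread can talk.
const { port1, port2 } = new MessageChannel();
port1.on('message', (msg) => console.log('from hooks thread:', msg));
port1.unref();

// Register the customization hooks and hand one port to the hooks thread.
register('./my-hooks.mjs', {
  parentURL: import.meta.url,
  data: { port: port2 },
  transferList: [port2],
});
```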

The JavaScript runtime is used to develop web applications, real-time applications, and command-line tools.

Bun Update Addresses Bugs

Bun 1.0 was released last week. This week, creator Jarred Sumner posted that Vercel has added Bun install support and Replit added Bun support. Ruby on Rails also added Bun support, and Laravel Sail now installs Bun by default. There’s also a TypeScript web framework that runs on Bun called Elysia.

All is not perfect in Bun world, however, and the bug reports are starting to roll in, with 1,027 bugs reported on the new runtime. To be fair, a good portion of those go back to Bun’s early days, but around 400 bugs have been filed since its 1.0 release. Bun v1.0.1, posted Tuesday, addresses some of these problems.

Free Prompt Engineering Course for Web Developers

Developer education platform Scrimba is offering a free prompt engineering course for web developers. Before taking the course, it’s recommended that developers have a basic understanding of HTML, CSS, JavaScript and React. It’s taught by Treasure Porth, a software engineer who has taught code since 2015. The three-hour course focuses on creating prompts, AI-assisted coding, and using AI large language models for job searches.

Self-Hosted CDEs Preferred to SaaS in Large Orgs, Says Coder (Thu, 14 Sep 2023)
https://thenewstack.io/self-hosted-cdes-preferred-to-saas-in-large-orgs-says-coder/

Coder, the self-hosted “Cloud Development Environment” (CDE), has just announced version 2.0 of its product, which includes new Dev Container support and integration with JFrog’s artifact repository. To discuss the latest in Coder.com, I spoke to co-founder and CTO Kyle Carberry and new CEO Robert Whiteley.

When it comes to CDEs, SaaS products like GitHub Codespaces seem to be the standard in this market — in other words, not self-hosted. So I asked the Coder pair why a developer would want to go the self-hosted route.

Carberry replied that Codespaces “prescribes the way that someone writes software,” whereas Coder is an “enterprise abstraction where there’s a maximum flexibility.” He added that on Coder, you can “bring whatever you want and make the development environment” using your chosen coding tool.

Why Self-Hosted CDEs are Increasing in Popularity

I noted that I’d recently reported on the launch of Daytona, which is also a self-hosted CDE (although its chosen acronym is SDE, which stands for “standardized development environment”). Neither of the Coder executives was familiar with Daytona, because it’s so new. But Whiteley had an interesting perspective on why self-hosted CDEs (or SDEs) are trending now, particularly in the enterprise market.

“I think what we’re seeing is a second generation of cloud development environments, or CDEs. We’re going to see more self-hosted or deployable. So Daytona’s a good example of a company that’s coming up. I wouldn’t be surprised if some of the SaaS-only versions actually ended up reversing course and having a deployable version as well.”

It’s a good point; and in fact, in the conclusion of my Daytona article, I’d questioned whether GitHub Codespaces will also offer self-hosting in due course.

Coder and JFrog

“Mostly what customers have expressed is, look, this is an early adopter market,” said Whiteley, regarding CDEs. “Early adopters tend to be very sophisticated and they need the ability to have control over the environment, to twist and turn the nerd knobs, so to speak. And so I think SaaS takes that away. And to Kyle’s point, it’s overly prescriptive on how you code, where you code, when you code. So I do think — by the way, huge fan of SaaS — I think it will be the mainstream part of this market in, you know, 12-18 months.”

He’s referring to CDEs in the enterprise here, because in the consumer market (individual developers), products like GitHub Codespaces and Replit are already much more popular than Coder. But what Whiteley is saying is that “early adopter” companies are more interested in self-hosted CDEs.

Security Is #1 Reason to Self-Host (But There Are Emerging Reasons)

This raises the question: What kinds of companies use Coder currently? And is it security that is top of mind for them when choosing to self-host their CDE?

Whiteley confirmed that security is “by far” the biggest factor, particularly for large enterprises.

“So the value of cloud development environments, in general, is [that] I’ve essentially shifted development from local workstations to a cloud-hosted workspace of some kind. And so inherently, my development is now ‘behind the firewall’, right, so my source code is not on-prem or on a laptop. I can put access controls in place, I have better discoverability […] of what the developer is working on.”

However, he noted that there are emerging use cases for self-hosted CDEs, other than security.

“The one thing about CDEs is they do in some cases require a behavioral change from development,” he explained. “It used to be entirely remote. Now part of the solution can be remote, maybe your IDE is remote, but all of the actual coding practice is now centralized in a cloud.”

He added that “if you’re using AI or ML, you’re probably coding in the cloud already, because you need access to exotic GPUs or you have some large dataset that you’re trying to train a model on.” So, AI-based developer use cases are incentivizing companies to move to CDEs.

Large companies also choose self-hosted CDEs because it’s more cost-effective, according to Carberry. Some of its customers already use Kubernetes, as one example, so they can put Coder into that environment, which Carberry says is a lot less expensive than paying a SaaS provider to host your CDEs.

“It’s one of the reasons that we actually don’t prescribe any infrastructure,” Carberry added. “Some of our customers are extremely happy with, like, VMs. And with Coder, they provision a VM for each developer and it automatically shuts off when they’re not using it. Some of our customers use Kubernetes, some of them […] run OpenShift and then maybe develop inside of there. So it’s kind of like a hodgepodge, I would say, but the biggest lesson is that we can’t really prescribe anything […] particularly to large enterprises.”

Dev Containers

From a developer perspective, perhaps the most interesting thing about Coder 2.0 is its enhanced support for Dev Containers, a Microsoft-developed open standard that “allows you to use a container as a full-featured development environment.”

Carberry said that previously Coder supported dev containers as “a second class citizen,” but in 2.0 the product offers “Envbuilder for Dev Containers,” which is an open source project by Coder based on the Microsoft Dev Containers specification. “Envbuilder enables users to control their development environments without affecting infrastructure or requiring the work of DevOps and Platform teams,” stated Coder in its announcement of 2.0.

Whiteley added that dev containers is “an emerging spec,” and that “we were sort of dragged here by customers — some of our largest customers were doing dev containers, wanting to make it part of their development standard.” But he said that even if the dev containers spec ultimately doesn’t work out, Coder doesn’t rely on it (it’s just an option) and so its product won’t be impacted.

Web Dev Platform Netlify Releases Software Development Kit (Thu, 14 Sep 2023)
https://thenewstack.io/web-dev-platform-netlify-releases-software-development-kit/

Web development platform Netlify released a software development kit (SDK) Wednesday that it said will make it easier for tech partners and customers to design custom integrations with Netlify.

“The SDK is exciting to me because it opens up for partners and the other tool makers to integrate into Netlify and enterprise companies to build integrations, specific to their services on Netlify, from the beginning,” CEO Matt Biilmann told The New Stack.

Netlify offers serverless backend services for web applications and dynamic websites. The SDK supports taking a composable architecture approach to web applications and websites at scale, Biilmann said.

“We coined the term Jamstack and pioneered this whole idea of building decoupled web UIs that talk to all these different APIs and services,” he said. “Now that’s maturing into this idea of composable architectures at scale, where you combine together many different tools instead of buying one big monolithic tool.”

Netlify Connect, which was released in June, plays a role in that, he added. Netlify Connect allows developers to integrate content from multiple sources into a single data unification layer for access through a GraphQL API, according to the documentation. That allows data updates to sync automatically. The SDK includes connectors to support connecting to and syncing data from a custom data source in Netlify Connect.

SDK Simplifies Flows, Authentication and Connectors

The SDK also will simplify flows, OAuth authentication and connectors, Biilmann told The New Stack.

“The connector part of the SDK allows partners or internal developers to build their own connectors and define ‘here’s a connector’ for Sanity, Sitecore or Adobe Experience Manager, or as a large company, ‘here is a connector to our internal product catalog.’ Once that connector is defined, any team building with it can simply install it, get data into Netlify Connect and start building on top of it,” he said.

Already, partner companies have deployed connectors using the SDK. For example, the MySQL platform PlanetScale created an integration that allows joint customers to deploy data-intensive applications without worrying about the underlying infrastructure or issues with data scalability.

It also incorporates a build event handler, which is a function that is called during the build process. For instance, performance monitoring firm Sentry has built a connector that sends all the source maps from the build through Sentry, by leveraging the SDK’s build event handlers.

“Now if there is an error in your frontend, it will be reported to Sentry and Sentry can use the source maps to tell you exactly where in the code it happened,” Biilmann said. “The build event handler will allow an integrator like Sentry to orchestrate all of that so when you install the Sentry integration, they can see from now on in your build.”
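
As a hedged sketch of what a build event handler looks like, modeled on the SDK announcement (the event name and log line are illustrative; the exact signatures live in the SDK docs):

```typescript
import { NetlifyIntegration } from '@netlify/sdk';

const integration = new NetlifyIntegration();

// Runs during every build of a site that installs this integration -- the
// kind of hook an integrator like Sentry uses to collect source maps.
integration.addBuildEventHandler('onPostBuild', ({ constants }) => {
  console.log('Build finished; publish directory:', constants.PUBLISH_DIR);
});

export { integration };
```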

Previously, third-party integrations were handled by plug-ins written as NPM modules, he explained.

“There was no real control over the UI installation experience and those pieces and other parts of it,” Biilmann said. “If you wanted to do all our flows and so on, we had to do custom work together with a partner.”

Support for Enterprise Software Integration

The SDK also incorporates API handlers and an integration UI.

“The integration UI gives you a declarative way of building the UI for your integration within Netlify,” he said. “The API handlers allow you to use Netlify itself to build the backend for that UI, because, obviously, you probably need a backend that has access to the right secrets, that can talk to Sentry’s API, talk to Netlify’s API and make everything fit together. That’s part of the SDK.”

The SDK allows developers to define what should happen at build time, what should be injected into the runtime code, what path should be a connector, how the UI should look and what the API handlers should be to make that UI actually function and work, he added. For instance, with Sentry’s integration, developers can click OAuth to do an OAuth flow in the associated Netlify project.

It also allows enterprises to create their own integrations with their own partner software. Enterprises will “almost certainly” have off-the-shelf software they’re using and want to connect to, he said.

“They’ll almost certainly also have a bunch of internal APIs and services that they want to make reusable for their UI teams, and that’s why the SDK is also really the toolkit that they can use to build private integrations that are not publicly shared with any other Netlify team, but within their organization,” he said. “[That] can be how they make reusable building blocks that a web developer can simply come in, click through some options to install, and now they’re off to the races.”

Dedicated IDE for Rust Released by JetBrains (Wed, 13 Sep 2023)
https://thenewstack.io/dedicated-ide-for-rust-released-by-jetbrains/

JetBrains today launched an integrated development environment for the Rust programming language, called RustRover.

Previously, the company provided only its IntelliJ Rust plugin for Rust. Other plugins for Rust include rust-analyzer and RLS. There are also text editors that support Rust, but RustRover is the first dedicated Rust IDE.

IDEs typically include a code editor, debugger, compiler, and other features to help developers write, test and deploy software. A dedicated IDE is an important milestone in the maturity of a programming language, said Vitaly Bragilevsky, developer advocate for RustRover.

“From our point of view, that [plug-in] was enough but then we felt that something has changed in the ecosystem, in the community,” Bragilevsky told The New Stack. “The state of the community persuaded us that [we] really need it.”

One trend JetBrains noticed is that the Rust community is expanding: Bragilevsky said JetBrains’ research from mid-2022 found that 56% of the Rust developers surveyed had started using Rust in the prior six months. SlashData put the community at around 3.7 million developers in the State of the Developer Nation released in May 2023, which itself was a 68% year-over-year increase.

Many come to Rust from the JavaScript and Python communities, Bragilevsky added.

“Those folks may be a bit unhappy about their previous programming languages,” he said. “Maybe they don’t have enough performance, and they can get that performance with Rust. Sometimes they don’t have enough safety. And Rust provides that for sure. So they basically want to try something new, which gives more opportunities in what they need.”

Why a Dedicated IDE?

JetBrains takes an unusual approach in offering specialized IDEs that work with specific programming languages and technologies. For instance, it offers GoLand for Go, WebStorm for JavaScript, and RubyMine for Ruby. Zend is another example of a specialized IDE, in that case for PHP. However, although it is designed for Rust, the IDE can be used for other languages.

IDEs allow developers to work from one space, Bragilevsky explained. “You can work with databases in the same software — you can issue HTTP requests, for example. So you can do many things just besides writing the code; and the level of support for that writing also can be more powerful if you have an IDE, because text editors are usually limited in what they provide to their users.”

Frontend Support

Though Rust is primarily a backend language, RustRover also provides support for frontend technologies and databases. Specifically, that means developers can build a range of applications without the need for other tooling. For instance, it provides the ability to see what’s going on with a database from within the IDE.

“For example, [web developers] implement web backends and Rust is becoming quite popular in this area,” Bragilevsky said. “You can just launch RustRover and then you can do some web development, like HTML. You can write styles for that page. You can do what you want. So it’s, once again, an integrated experience.”

Additional Features

RustRover includes real-time feedback and debugging. It also includes permissive completion and parsing, which provide code suggestions when developers make errors. This is not a code-suggestion tool like Copilot, but it does rely on algorithms to recommend code corrections if there is a mistake, Bragilevsky explained.

Among the additional features RustRover incorporates are:

  • Team collaboration;
  • Advanced unit testing integration, which allows developers to conduct testing, rerun failed tests, and resolve errors;
  • Rust toolchain support, including Rust compiler;
  • Full version control system integration, with built-in GitHub and Git support, with version control for teams.

There isn’t a public roadmap for RustRover and Bragilevsky would not comment on what future rollouts might include.

“When you develop an IDE, you never have a stopping point,” he said. “There are always features that should be implemented. And once you have a lot of features, developers usually want more.”

RustRover can run on Windows, macOS and Linux. It’s available for free while in its early access program (EAP). While RustRover is available in the EAP, JetBrains will keep the plugin version compatible with IDEA Ultimate and CLion.

How Attackers Bypass Commonly Used Web Application Firewalls (Wed, 13 Sep 2023)
https://thenewstack.io/how-attackers-bypass-commonly-used-web-application-firewalls/

Cloud-based web application firewalls (WAFs) sport an impressive array of protections. Yet many hackers claim they can easily bypass even the most sophisticated WAFs to execute attack queries against protected assets with impunity.

The threat research team at NetScaler, an application delivery and security platform, found that many cloud-based WAFs can be readily circumvented. If you have committed to paying for a WAF service, you need to run tests to ensure that your WAF can do — and is doing — what it’s supposed to do to protect your applications and APIs.

If you take away nothing else, I implore you to run some easy tests against your environment to check that your WAF service is protecting optimally. At the end of this article, I’ve outlined a few simple but often-overlooked steps to help you identify if someone is already bypassing your WAF and compromising the security of your web applications and APIs. But first, let’s look at the most common ways that attackers get around WAF defenses.

The Most Common WAF Attacks

Cloud-based WAFs are security solutions delivered as a service; along with their on-premises counterparts, they help protect web applications and APIs from a variety of attacks documented by the Open Web Application Security Project (OWASP). The most common WAF attacks include:

Injection

When it comes to robbing a ton of data through a keyhole like a web application, SQL injection is the way to go. Injection attacks were first documented more than 25 years ago and are still commonly used today.

The beginning of a database query is often designed to retrieve all information, followed by a filter to only show one piece of information. For example, a commonly used query is one that initially retrieves all customer information but then filters for a specific customer ID. The database executes this command against every line in the table and will return the requested information on the table row(s) where this statement is true. Usually, this is one single row. Attackers manipulate the form fields that are used to populate such queries to insert database commands, resulting in a statement that evaluates to true for every row in the table, which returns the contents of the entire table in the response. In an ideal world, developers would always secure their forms, so injection attacks would not be possible. However, developers can be prone to error on occasion, so not all form fields are protected all of the time.
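
A sketch of that manipulation and its standard fix, parameterized queries (node-postgres style; the table and input are illustrative):

```typescript
import { Client } from 'pg';

const client = new Client();
await client.connect();

// Vulnerable: concatenating user input lets "42 OR 1=1" turn the WHERE
// clause true for every row, returning the entire customers table.
const id = "42 OR 1=1";
await client.query(`SELECT * FROM customers WHERE id = ${id}`);

// Safe: a parameterized query treats the input strictly as data, never SQL.
await client.query('SELECT * FROM customers WHERE id = $1', [id]);
```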

The latest OWASP Top Ten list now includes cross-site scripting in its injection category. Cross-site scripting is where attackers insert their scripts into your website or your web URLs so that unsuspecting victims execute them in their browsers, allowing attackers to transmit cookies, session information, or other sensitive data to their own web servers.

Broken Access Control

Broken access control allows an attacker to act outside of the behavior the application or API developer intended. This vulnerability can lead to unauthorized information disclosure, the modification or destruction of all data, and the ability to perform a business function outside the user’s limits, with some of these flaws exclusive to APIs.

OWASP recently raised the criticality of broken access control to number 1 on its top 10 list of web application vulnerabilities. The reason for its newfound importance lies in the fact that this vulnerability category is especially applicable to APIs — a relatively new vector compared to web applications, which have been around for a long time. Attackers find APIs and attempt to exfiltrate information from them. And because APIs are not designed for human input, the same sort of validation inputs and checks used for web applications may not be top of mind for developers. Sometimes APIs are published without the knowledge of the security and operations teams.

Vulnerable and Outdated Components

Whenever a new vulnerability is found in a commonly used component, it results in a massive spate of bot-generated traffic scanning the internet, looking for systems that can be compromised. If you set up a web server and make it available to the internet, you will quickly see log entries for requests made to specific types of applications that do not exist on your newly created web server. This activity is simply the hacker network casting a wide net looking for vulnerable servers to harvest.

The primary function of a WAF is to examine the contents of an HTTP request — including the request body and request headers where the attack payloads are located — and decide if the request should be allowed or blocked. Some WAFs will also inspect responses to assess if there is an unauthorized leaking of data or sensitive information. They will also record the response structure (a web form or cookies, for example), which effectively ensures that subsequent requests are not tampered with.

The 3 Types of WAFs

Web application and API firewalls generally come in three models: negative, positive, and hybrid:

  • The negative security model uses simple signatures and is pre-loaded with known attacks that will block a request if there is a match. Think of this as a “deny list.” In other words, the default action is “allow” unless it finds a match.
  • The positive security model is pre-loaded with a pattern of known “good” behavior. It compares requests against this list and will only allow the request through if it finds a match. Everything else gets blocked. This would be considered an “allow list.” In this case, the default action is “block” unless it finds a match. The positive security model is considered much more secure than the negative security model — and it can block zero-day attacks.
  • The hybrid security model uses signatures as a first pass and then processes the request to see if it matches the allow list. You would be correct to ask, “Since an attack would not be on the allow list anyway, why bother with the signature pass?” The reason is that less processing is required by the negative security model, which uses signatures to block requests, than by processing everything through the positive security model. More processing equates to larger WAF appliances or higher costs for cloud-based hosting.

All three WAF security models have one thing in common: They examine the inbound request and look for threats. The effectiveness of request-side examination depends on what the WAFs are looking for and how granularly they inspect the request payload.

How Attackers Take Advantage of WAF Limitations and IT’s Lack of Due Diligence

Attackers are aware that looking for attacks in traffic is computationally expensive for most organizations, and that commercial inspection solutions are designed to match real-world use cases as efficiently as possible. They know that real-world HTTP(S) GET or POST requests are usually only a few hundred bytes, maybe 1-2 kilobytes with some big cookies.

And attackers know that many WAF solutions will only scan a small, finite number of bytes of a request when looking for that Bad Thing. If WAFs don’t find it there — or if the request is bigger than 8 kilobytes, as per NetScaler’s testing — many WAFs will not scan the request at all. They will consider it an anomaly and simply forward it on. I’ll say that again: Many WAFs simply forward the request with no blocking and no logging.

Wow.

The WAF ‘Hack’ Explained

To bypass WAFs, attackers craft a SQL injection or cross-site scripting payload and pad out the request with garbage to push it past the 8-kilobyte limit, then hit send. Padding a request can be as simple as adding a huge header, cookie or other POST body text in the case of a login form. The request is not scanned and is passed through to the backend server, where the payload is executed.

Some WAFs can be configured to counter padded attacks, but this protection is not turned on by default. Speculating as to why this is so, I can only arrive at the conclusion that turning on such protection requires extra processing, which drives up costs for WAF users. Not wanting their WAFs to be perceived as more expensive than their competitors, vendors leave additional protections disabled. Be aware that your web applications and APIs are fully exposed if you don’t change the default setting.

A single-pass WAF architecture that is available with a WAF solution like NetScaler performs miles better than traditional proxy strategies, which is why NetScaler can enable the protections against padded attacks out of the box without the added costs.

Are These WAF Vulnerabilities New?

Padded attacks are not new, and WAF vendors are well aware of the issue. But the WAF industry as a whole has not addressed the need for the most effective protection to be turned on by default.

Some analysts have communicated this gap in security with the vendors in question, with the vendor responses being along the lines of, “This is a known and documented limitation, and customers should apply this specific rule if they want this protection.” But the workaround is often buried in the nuts and bolts of the WAF configuration guide, and admins and deployment operators can (and do!) miss it.

In today’s world, where things need to “just work” when turned on and where there is the expectation that every solution used by IT will simplify tasks and reduce administrative overhead, WAFs need to be secured from the start. Sure, if a legitimate request needs to be bigger, then it will be blocked. That’s where exceptions can be made, and admins are aware of the risk when they do so. But leaving an entire site exposed should never be a consideration.

Attackers know that many WAFs do not have protections turned on by default, which is why they take advantage of this vulnerability with padded attacks. A couple of the WAFs that NetScaler tested were not vulnerable to this attack method, but many were. Some WAFs had slightly larger request limits (128 kilobytes) but were just as easy to bypass once the body was padded out. Some solutions favor this “fail open” approach to avoid additional costs resulting from extra processing, to prevent unexpected false negatives, and to allow for a more simplified — though less secure — setup.

However, the “fail open” approach violates the “strong defaults” principle of cybersecurity that we should expect from security vendors. When choosing a WAF, you need to ensure that you are protected out of the box against padded attacks.

The Takeaway: 3 Simple Steps to Securing Your WAF

Your WAF solution may not be correctly configured, leaving your web applications and APIs completely exposed to attackers who can easily deploy padded attacks via SQL injection and cross-site scripting.

As you race off to check your WAF configuration, here are your three must-dos:

  • Test your web applications (both internal and external) with padded requests.
  • Examine web application logs for large request sizes where they are not expected. For example, look at a login POST form that typically contains just a username and password and ranges in size from approximately 20 to 300 bytes. If you see POST requests greater than 8 kilobytes in size, this may be a padded attack attempt (see the sketch after this list).
  • Evaluate whether you can make a configuration change that will mitigate padded attacks and, if you can, make sure to compare the before-and-after costs so that you get an accurate cost for the added protection.
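
Here is a minimal sketch of that log check in TypeScript. It assumes a JSON-lines access log whose entries record the method, path and request body size; the field names and the access.log.jsonl path are illustrative, so adapt them to whatever your WAF or web server actually emits:

```typescript
// padded-post-check.ts — flag login POSTs that blow past the ~8 KB
// inspection limit discussed above. Log format and field names are
// hypothetical; substitute your own.
import { readFileSync } from "node:fs";

const INSPECTION_LIMIT = 8 * 1024; // bytes

interface LogEntry {
  method: string;
  path: string;
  requestBytes: number;
}

const lines = readFileSync("access.log.jsonl", "utf8").split("\n").filter(Boolean);

for (const line of lines) {
  const entry = JSON.parse(line) as LogEntry;
  // A login POST normally carries ~20-300 bytes; anything past the WAF's
  // inspection limit deserves a closer look.
  if (entry.method === "POST" && entry.path === "/login" && entry.requestBytes > INSPECTION_LIMIT) {
    console.warn(`Possible padded attack: ${entry.path} (${entry.requestBytes} bytes)`);
  }
}
```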

By following this simple guidance, you can correctly configure your WAF to improve the security of your web applications and APIs.

The post How Attackers Bypass Commonly Used Web Application Firewalls appeared first on The New Stack.

]]>
Bun 1.0 Ships as Node.js and Deno Alternative https://thenewstack.io/bun-1-0-ships-as-node-js-and-deno-alternative/ Mon, 11 Sep 2023 19:55:15 +0000 https://thenewstack.io/?p=22717915

One of the hardest things about shipping the 1.0 version of Bun, creator Jarred Sumner shared via X (Twitter), was

The post Bun 1.0 Ships as Node.js and Deno Alternative appeared first on The New Stack.

]]>

One of the hardest things about shipping the 1.0 version of Bun, creator Jarred Sumner shared via X (Twitter), was removing the frontend server that was part of the beta.

“The thing I wish we had more time for is making Bun good for frontend development,” Sumner said during an X (Twitter) Q&A after the Thursday launch. “It’s not bad for it — you can use […] the tools you already use. But I still think there’s a big opportunity there for something where the runtime is directly integrated with the frontend build tools.”

That said, few users seemed to mind once Bun was released. Feedback made it clear that removing the frontend server was fine, he noted, and the majority of responses to the news on social media were positive. By Friday, the buzz over Bun was all about its speed and ease of use.

Built for Speed

Bun competes with Node.js and the Rust-based Deno, both of which were created by Ryan Dahl. In fact, it’s designed to be a drop-in replacement for Node.js, according to a release livestream that aired Thursday. Bun writes files three times faster than Node.js and reads them up to three times faster, the team said during the livestream. Ashcon Partovi, product manager at Oven, the company that created Bun, described the Bun runtime.

“There are a lot of tools in the Bun toolkit,” Partovi said. “But the crown jewel is the Bun runtime. Bun is a drop-in replacement for Node.js that’s backward compatible, and can run TypeScript and TSX files, no dependencies necessary.”

He added that developers can replace any npm run command with bun run instead: npm takes about 150 milliseconds to start running a script on a MacBook Pro, compared to 30 milliseconds for Bun, he said.

“Npm feels noticeably laggy. Whereas Bun feels instantaneous,” Partovi said.
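
To see how small the drop-in story can be, here is a minimal Bun HTTP server in TypeScript, following the Bun.serve API from Bun’s documentation. Run it with bun run server.ts — no compile step, no dependencies:

```typescript
// server.ts — a minimal sketch of a Bun HTTP server. Bun executes the
// TypeScript file directly; no transpiler or node_modules required.
const server = Bun.serve({
  port: 3000,
  fetch(req: Request): Response {
    return new Response(`Hello from Bun! You asked for ${new URL(req.url).pathname}`);
  },
});

console.log(`Listening on http://localhost:${server.port}`);
```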

Bun gets a boost from using JavaScriptCore from WebKit, which is known for being exceptionally fast, according to full-stack developer Shalini Tewari, who shared her thoughts about the speed via X (Twitter).

“Node.js, Bun and Deno are all server-side js runtimes, but they have completely different goals.

Choosing between Bun and Node.js depends on your project’s needs,” Tewari suggested. “If you need speed and a straightforward, lightweight experience, go for Bun. If you want a broader ecosystem and community support with lots of tools, Node.js is a solid choice. You can even use both to make your JavaScript applications strong and efficient.”

Benchmarking Runtimes

James Konik, a software engineer with the developer security platform Snyk, recently compared the three runtimes and found Bun outperformed both Node.js and Deno.

“Powered by Zig, its aim is to be an all-in-one runtime and toolkit with a focus on speed, bundling, testing and compatibility with Node.js packages,” he wrote. “One of its biggest draws is its performance, which is demonstrably faster than both Node.js and Deno. This makes it a very attractive proposition if it can deliver on all of that.”

He noted that the Bun maintainers provided an example benchmark running an HTTP handler that renders a server-side page with React. Bun handled about 68,000 requests per second compared to about 29,000 and 14,000 for Deno and Node.js, respectively.

In his own tests of an earlier version of Bun, Konik found Node.js handled 21.29 average queries per second, while Deno rated 43.50. Bun handled 81.37 average queries per second.

“In another comparison between Node.js, Deno and Bun, Bun is the fastest to handle concurrent connections. Its requests per second are quite higher too,” Konik wrote. “For instance, with 10 concurrent connections, Bun achieves 110,000 requests per second while Node.js achieves 60,000 and 67,000 for Deno.”

It’s worth noting that a different comparison found Deno and Bun performed very similarly.

Of course, speed isn’t the only factor to consider in a runtime. In a Deno discussion, developer markthree pointed out each runtime had its strengths.

“Bun is more concerned with performance, so it is much better than the other two runtimes in terms of performance right now,” he wrote. “Deno is synonymous with safety, in my opinion, I can safely use packages from the community without worrying about them doing things to my system that I don’t know about. Node is now starting to make a big push on performance and security, too.

“Competition is good, js runtime is starting to evolve,” he added.

More to Come from Bun

That said, Bun is still a work in progress. For instance, Bun Install is ready for Linux and macOS, but the team was struggling to get the Windows version working, Sumner revealed during an X (Twitter) Q&A held after Thursday’s release. Bun provides a limited, experimental native build for Windows; at the moment, only the Bun runtime is supported, according to the documentation.

“Bun Install will probably be two weeks later is my guess,” Sumner said. “And this is going to be super unoptimized in the first release for Windows. It’s going to take some time before it actually is fast.”

In addition to the runtime, Bun has baked-in features that will make developers’ lives easier, said Sumner, such as:

  • Support for both CommonJS and ES modules
  • Support for hot reloading using bun --hot server.ts
  • A plug-in API that lets developers define custom loaders

“You can extend the Bun runtime to support things like .yaml imports,” he said. “It uses an API that is inspired by esbuild, which means many esbuild plugins just work in Bun.”
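
Based on the plugin API Bun documents, a loader of the kind Sumner describes looks roughly like the hedged sketch below; the js-yaml dependency is a stand-in for whatever YAML parser you prefer:

```typescript
// yaml-plugin.ts — a sketch of Bun's esbuild-inspired plugin API.
import { plugin } from "bun";
import { load as parseYaml } from "js-yaml";

plugin({
  name: "yaml-loader",
  setup(build) {
    // Intercept .yaml/.yml imports and expose the parsed document
    // as the module's default export.
    build.onLoad({ filter: /\.(yaml|yml)$/ }, async (args) => {
      const text = await Bun.file(args.path).text();
      return {
        exports: { default: parseYaml(text) },
        loader: "object",
      };
    });
  },
});
```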

The post Bun 1.0 Ships as Node.js and Deno Alternative appeared first on The New Stack.

]]>
Dev News: Best States for Web Devs, Slint Adds Rust Support https://thenewstack.io/dev-news-best-states-for-web-devs-slint-adds-rust-support/ Sat, 09 Sep 2023 11:00:17 +0000 https://thenewstack.io/?p=22717796

Washington is the best state to be a web developer, followed by Virginia, Maryland, Colorado and California. Web design company

The post Dev News: Best States for Web Devs, Slint Adds Rust Support appeared first on The New Stack.

]]>

Washington is the best state to be a web developer, followed by Virginia, Maryland, Colorado and California. Web design company Digital Silk ranked the states based on average base salary, remote working statistics and quality of life scales in each state. Quality of life includes considerations such as rent, food and transportation costs. Those factors were then combined into an index score.

Washington ranked 82.6 out of a possible 100, in part because it had the highest average web developer salary, at $138,780. The state also scored 10/10 on the index for the number of web development jobs per 1,000 residents.

Virginia placed second with high scores for the number of remote working opportunities — 22.3% of Virginia’s entire workforce works from home, according to Digital Silk. The base salary for web developers there is the second highest in the country, at $101,060. Maryland ranked third with an average web developer salary of $93,160. The state also had the second-highest number of people working remotely.

Finally, Colorado and California rounded out the top five, with Colorado offering an average web developer salary of $80,270 and nearly 24% of the workforce being remote. California — despite being home to Silicon Valley — pays an average web developer salary of $99,620, but it has the highest cost of living among the top 10 states.

Massachusetts, Illinois, Minnesota, New Jersey and Utah rounded out the top 10. Of those, Utah had the lowest base web development salary at $61,330, which was mitigated by the lowest cost of living and a 20% remote workforce.

Slint Improves Support for Rust, C++

Slint 1.2 was released this week with more support for microcontrollers and an improved platform API for Rust and C++ that will enable new use cases, the release note said.

Slint is a graphical user interface toolkit for desktop and embedded applications written in Rust, C++ or JavaScript.

The new use cases supported include embedding Slint UI as a plugin in foreign applications, such as digital audio workstations, and developing C++ applications with Slint on microcontrollers, since the majority of microcontroller SDKs are based on C/C++.

There’s also new support for the Espressif IDF framework, a C-based SDK that makes it easy to target microcontrollers from the ESP32 family.

Slint 1.2 also adds a new lightweight and experimental LinuxKMS backend.

“Often the user interface on an embedded device is implemented via a single full-screen application,” the Slint team explained. “In such a device, a windowing system like X11 or Wayland adds no value and slows down the device startup.”

The LinuxKMS backend renders directly to the screen with OpenGL or Vulkan using Linux’s KMS/DRI infrastructure, for maximum performance, they added.

Turbo 8 Drops TypeScript

David Heinemeier Hansson, the creator of Ruby on Rails, announced that he’s dropping TypeScript from Turbo 8, a library commonly used with Rails, in favor of plain old JavaScript.

“By all accounts, TypeScript has been a big success for Microsoft. I’ve seen loads of people sparkle with joy from dousing JavaScript with explicit types that can be checked by a compiler,” Hansson wrote. “But I’ve never been a fan. Not after giving it five minutes, not after giving it five years. So it’s with great pleasure that I can announce we’re dropping TypeScript from the next big release of Turbo 8.”

Tweet that reads "Days without TypeScript drama: 1"

This argument boils down to how a developer feels about typing — assigning a data type to a variable or expression. Typing allows the compiler or interpreter to understand what kind of data is being stored or manipulated, and to check that the code is correct. There are two main approaches:

  • Static typing, where the data type of a variable is known at compile time, allows the compiler to check the code for errors before execution.
  • Dynamic typing, where the data type of a variable isn’t known until runtime, which means the compiler can’t check for errors before execution.

TypeScript is a statically typed language, while JavaScript is a dynamically typed language.
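
A two-line example makes the difference concrete — TypeScript rejects the bad call before the code ever runs, while plain JavaScript would happily execute it and return NaN:

```typescript
function total(price: number, quantity: number): number {
  return price * quantity;
}

total(9.99, 3);       // fine
total(9.99, "three"); // compile-time error: Argument of type 'string' is not
                      // assignable to parameter of type 'number'
```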

JavaScript is so capable now, with browsers being able to interpret it without any need for a compiler, Hansson contended.

“TypeScript just gets in the way of that for me. Not just because it requires an explicit compile step, but because it pollutes the code with type gymnastics that add ever so little joy to my development experience, and quite frequently considerable grief,” he wrote.

Needless to say, this led to a debate among frontend developers on social media, but this is an ongoing debate that has even generated academic research.

“Due to its dynamic and flexible nature, however, JS applications often have a reputation for poor software quality,” researchers from the Institute of Software Engineering at the University of Stuttgart noted in their paper. “While the type-safe superset TypeScript (TS) offers features to address these prejudices, there is currently insufficient empirical evidence to broadly support the claim that TS applications exhibit better software quality than JS applications.”

The post Dev News: Best States for Web Devs, Slint Adds Rust Support appeared first on The New Stack.

]]>
Codeanywhere Founders Take on GitHub Codespaces with Daytona https://thenewstack.io/codeanywhere-founders-take-on-github-codespaces-with-daytona/ Wed, 06 Sep 2023 18:55:30 +0000 https://thenewstack.io/?p=22717607

Codeanywhere was one of the first web-based code editors when it was released in 2009 (as PHPanywhere). Since that time,

The post Codeanywhere Founders Take on GitHub Codespaces with Daytona appeared first on The New Stack.

]]>

Codeanywhere was one of the first web-based code editors when it was released in 2009 (as PHPanywhere). Since that time, entire developer environments have migrated to the cloud and Codeanywhere has found itself slipping behind in the ultra-competitive Cloud IDE market, where newer products like GitHub Codespaces and Replit have taken center stage. To rectify that, three Codeanywhere veterans — Ivan Burazin, Vedran Jukic and Goran Draganić — are launching a brand new company, called Daytona.

Daytona is being promoted as “a secure alternative to GitHub Codespaces” and offers a twist on the successful cloud IDE formula, by allowing enterprises to “self-manage” Daytona on their own infrastructure. I spoke to co-founder and CEO Ivan Burazin to find out more.

How Does Daytona Differ from GitHub Codespaces?

Before we get to the competition with GitHub Codespaces, let’s first clarify how Daytona differs from its predecessor, Codeanywhere.

“The main thing is that Codeanywhere is more of an interface product and this [Daytona] is more of an infrastructure,” said Burazin.

By “interface” he meant that Codeanywhere simply provided a cloud interface to a developer environment that it hosted. But Daytona is more complex than that.

“What we learned from building out Codeanywhere was actually the infrastructure that we spun up underneath,” Burazin continued. “And the knowledge of how to spin these development environments up effectively, is basically what Daytona is.”

With Daytona, a developer can use their local IDE instead of (or alongside) a cloud-based one. So if they already use a software product like VS Code or a JetBrains IDE, it is compatible with Daytona.

“The thing that we do is we remove the developer environment from the local machine into the cloud, or remote server, or whatever,” said Burazin, “and all the connecting of that remote developer environment with the IDE is what we do in the background; and we spin them up and spin them down. And the user, the developer, feels like they’re working locally.”

Now, it should be noted that GitHub Codespaces also allows its users to work on local environments. On its homepage, GitHub notes that Codespace users can “use Visual Studio Code, Jupyter, or JetBrains.” This is enabled via extensions in those particular desktop software products. But, GitHub clarifies in its FAQ, the actual hosting of a codespace is done in GitHub’s cloud.

What Daytona is offering is essentially the ability to self-host development environments behind a firewall. This is the core difference from GitHub Codespaces.


Yet Another Cloud IDE…Er, SDE

In order to focus fully on Daytona, Burazin and his co-founders are ceasing work on Codeanywhere — it will “wind down eventually,” said Burazin.

So what was the motivation to build another cloud IDE product, especially given it’s already a crowded field? Burazin replied that the research they did indicated that enterprise companies want a secure, scalable development environment that works across local machines and cloud.

The word “secure,” by the way, is doing a lot of work here. When Daytona says that it is “a secure alternative to GitHub Codespaces,” it simply means that the ability to self-host (behind a firewall) is inherently more secure than hosting on an external provider (like GitHub).

According to Burazin, there weren’t many options on the market that offered self-hosting, and so he said that many companies built internal products to satisfy that need. He mentioned Uber, Shopify, LinkedIn and Eventbrite as examples.

Other companies, Burazin continued, have tended to rely on technologies like Citrix Server to enable remote access to IDEs outside of the corporate firewall.

“To make sure the code is secure, they would essentially give them a VM [virtual machine] inside the firewall, and the only way for developers to interface with that would be through Citrix Terminal Services. And so they’d essentially have to stream the IDE; so it was laggy, slow, a pain in the butt. So a product like Daytona allows enterprises to spin these developer environments securely behind the firewall, but the engineer or the developer can use a local IDE, so that it feels they’re working locally.”

Other than GitHub Codespaces, I asked Burazin who else is a competitor to Daytona.

“The closest competitors […] are Codespaces and Gitpod, in the sense of how the products were created — but neither of those enable you to self-manage it,” he said, adding that “the only product that does is Coder.”

Coder, which we’ve covered before on The New Stack, promotes itself as “Your Self-Hosted Remote Development Platform.”

Perhaps in order to differentiate, Daytona has coined a new acronym for its product: SDE, which stands for “standardized development environments.”

“SDEs not only provide a cloud-based development platform but also ensure uniformity across the development lifecycle,” the company stated in its launch press release.

I’m not sure a new acronym helps in a market already confused by what is or isn’t a “cloud IDE.” Daytona’s other main competitor, Gitpod, uses the term “cloud development environment” (CDE). According to Burazin, a CDE is “a subset” of the SDE concept.

Conclusion

Regardless of all the acronyms, the ability to self-host a development environment does seem like an enticing product offering to enterprises. The big question is whether GitHub (which of course is owned by Microsoft) will also offer that in due course. But for now, Daytona is primed to take on Coder with this functionality.

The post Codeanywhere Founders Take on GitHub Codespaces with Daytona appeared first on The New Stack.

]]>
What Can You Expect from a Developer Conference These Days? https://thenewstack.io/what-can-you-expect-from-a-developer-conference-these-days/ Wed, 06 Sep 2023 14:16:39 +0000 https://thenewstack.io/?p=22717375

What can you expect from a developer conference these days? Two topics in particular: the developer experience and AI. Developers

The post What Can You Expect from a Developer Conference These Days? appeared first on The New Stack.

]]>

What can you expect from a developer conference these days? Two topics in particular: the developer experience and AI.

Developers spend much of their time not coding, said Ivan Burazin, Chief Development Experience Officer at Infobip, in a recent discussion on The New Stack Makers before the Shift Conference in Zadar, Croatia. Burazin started the conference and sold it to Infobip, a cloud communications company.

When thinking about the developer experience, Burazin cited how developers waste about 50% to 70% of their productive time not coding. Productive time means what remains after vacation time, meetings and other matters get subtracted.

But the time keeps getting lost when considering how that core time gets eaten away by non-coding work. A developer has to wait to spin up an environment. Tests take away from a developer’s core time, as do builds. Start to add up the hours, and the time starts to melt away. Setting up a developer environment takes 2.7 hours a week. For tests, it’s over three hours a week. And for builds, it’s almost four hours a week.

The developer experience thus becomes a root matter, one that divides into internal and external realms. Externally, what matters is the developer’s customer experience. Internally, it is a matter of velocity — the amount of code a developer deploys.

“But at the same time, the experience developers [have] has to be better or more enjoyable because, in a sense, they will actually be able to produce more, faster,” Burazin said.

This all comes back to the overall developer experience, something Burazin pays attention to with Shift, coming up Sept. 18-19.

At Shift, the conference has talks on six stages, Burazin said. One stage will focus on the developer experience from an internal and external perspective.

The developer experience topic is new, but even newer is AI, which will be the focus of another stage at Shift.

But what should be covered in a discussion about AI if there are few real experts to move the conversation forward?

Burazin said it’s more about how people can use AI to build a product, service, or company. Every company will become an AI company in the future.

“How can you build something utilizing AI and that’s how we look at setting up themes on that stage,” Burazin said.

The post What Can You Expect from a Developer Conference These Days? appeared first on The New Stack.

]]>
Dev News: Astro 3.0, State of CSS, and React Opinions https://thenewstack.io/dev-news-astro-3-0-state-of-css-and-react-opinions/ Sat, 02 Sep 2023 13:00:41 +0000 https://thenewstack.io/?p=22717275

Astro 3.0 was released this week, making it the first major web framework to support the View Transitions API. This

The post Dev News: Astro 3.0, State of CSS, and React Opinions appeared first on The New Stack.

]]>

Astro 3.0 was released this week, making it the first major web framework to support the View Transitions API. This API enables fade and slide effects, and can even persist stateful elements across page navigation — capabilities that were previously only possible inside JavaScript Single Page Apps, according to the release notes.

“View Transitions are a set of new platform APIs that unlock native browser transition effects between pages,” the release note explained. “Historically this has only been possible in Single Page Applications (SPAs), but web browsers and spec authors have been working hard over the last few years to bring native page transitions to the platform, and Astro 3.0 is the first web framework to bring them to the mainstream.”

Developer and designer Joe Bell created a demo that puts some of the Astro View Transitions on display. Essentially, they allow developers to:

  • Morph persistent elements from one page to another;
  • Fade content on and off the page for a less jarring navigation;
  • Slide content on and off the page; and
  • Persist common UI across pages, with or without refresh.

“The best part about View Transitions in Astro is how simple they are to use. With just 2 lines of code, you can add a subtle — yet tasteful! — fade animation to your site,” the release notes stated.
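
Going by the Astro 3.0 release notes and documentation, that two-line setup looks roughly like this sketch of a shared layout (the file name and surrounding markup are illustrative):

```astro
---
// src/layouts/Base.astro — the import and the <ViewTransitions /> tag are
// the "2 lines" the release notes mention; everything else is boilerplate.
import { ViewTransitions } from "astro:transitions";
---
<html>
  <head>
    <title>My site</title>
    <ViewTransitions />
  </head>
  <body>
    <slot />
  </body>
</html>
```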

Other updates include:

  • Image optimization;
  • Astro components render 30-75% faster;
  • SSR Enhancements for serverless, which means new ways to connect to the hosting platform;
  • HMR Enhancements for JSX, which means fast refresh support for React and Preact; and
  • Optimized Build Output with cleaner and more performant HTML.

State of CSS: CSS-in-JS Trend Plateaus

The CSS-in-JS sector has plateaued, according to the 2023 State of CSS survey. The report, published this week, surveyed 9,190 developers around the world about their use of CSS.

CSS-in-JS allows programmers to style their components by writing CSS directly in their JavaScript or TypeScript code. The report suggested the reason behind the plateau may be that native CSS is adopting many of the main advantages of CSS-in-JS.

The report also found that newcomer Open Props has generated a “small but passionate” following that’s eager to retain the framework. Open Props, which became available in May, was created by Adam Argyle, a Google software engineer.

Meanwhile, Bootstrap is the most used framework, which is interesting because it also had the most developers (at 41%) who said they would not use it again. Tailwind ranked as the major CSS framework that developers are happiest to keep using.

Graph of CSS framework retention over time, from the 2023 State of CSS report

That’s Just, Like, Your Opinion, Man

Ryan Hoffnan is a full-stack developer who describes himself as “frontend oriented.” Recently, he raised the question of whether React, the unopinionated JavaScript framework, is becoming opinionated.

An unopinionated framework doesn’t dictate how developers structure their code or use third-party libraries, he explained. React has taken steps toward being opinionated, such as using folder trees as routers, he contended.

“For example, the official React documentation now recommends using Next.js or Remix for server-side rendering (SSR),” Hoffnan wrote. “These frameworks provide a number of features that can help developers build more efficient and scalable React applications, but they also come with a set of opinionated choices about how code should be structured and organized.”

He theorized this may be a maturity issue, since there’s now a wider range of third-party libraries and tools available. This adds to the appeal of creating an opinionated framework, which simplifies finding tools.

“Another reason for React’s increasing opinionatedness is that companies are increasingly looking for ways to reduce development costs and time to market,” he wrote. “Opinionated frameworks can help to achieve these goals by providing developers with a pre-configured set of tools and libraries that are known to work well together.”

The post Dev News: Astro 3.0, State of CSS, and React Opinions appeared first on The New Stack.

]]>
Vercel’s Next Big Thing: AI SDK and Accelerator for Devs https://thenewstack.io/vercels-next-big-thing-ai-sdk-and-accelerator-for-devs/ Thu, 31 Aug 2023 14:13:08 +0000 https://thenewstack.io/?p=22717122

Few companies have had a bigger impact on the frontend developer ecosystem in the 2020s than Vercel, steward of the

The post Vercel’s Next Big Thing: AI SDK and Accelerator for Devs appeared first on The New Stack.

]]>

Few companies have had a bigger impact on the frontend developer ecosystem in the 2020s than Vercel, steward of the popular React framework, Next.js. When I first wrote about Vercel, in July 2020, the company had just embraced the Jamstack trend and was liberally using the term “serverless” in its marketing. But with Jamstack on the decline and serverless less of a buzzword now, it’s no surprise that Vercel has latched onto the latest Next Big Thing: generative AI.

Vercel’s relatively new AI SDK has quickly gained traction amongst JavaScript developers — it’s currently running at 40,000 weekly downloads on npm. The reason, of course, is the incredible popularity of AI applications in 2023. Vercel’s CEO Guillermo Rauch tweeted last week that “building AI apps is the #2 reason folks are signing up to @vercel these days, ahead of social/marketing & e-commerce, based on signup surveys.” (While he didn’t specify what was #1, a commenter said it was easy-to-deploy Next.js projects.)

What Is the Vercel AI SDK?

Vercel defines the SDK as an “interoperable, streaming-enabled, edge-ready software development kit for AI apps built with React and Svelte.” It supports React/Next.js and Svelte/SvelteKit, with support for Nuxt/Vue “coming soon.” [Update: Vercel has advised that Nuxt and Solid.js frameworks are both now supported.] On the LLM side of things, the SDK “includes first-class support for OpenAI, LangChain, and Hugging Face Inference.” To complement the SDK, Vercel also offers a playground that has over twenty LLMs on tap.

The appeal of the Vercel AI SDK is similar to what made Vercel so popular with JavaScript developers in the first place: it abstracts away the infrastructure piece of an application.
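
As an illustration of that abstraction, here is a hedged sketch of a streaming chat endpoint following the shape of Vercel’s 2023 quickstart for Next.js — the model name and route path are arbitrary choices, not prescriptions:

```typescript
// app/api/chat/route.ts — a Next.js route handler using the AI SDK's
// streaming helpers (a sketch modeled on Vercel's quickstart of the time).
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask the model for a streaming completion...
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    stream: true,
    messages,
  });

  // ...and hand the token stream straight back to the browser.
  const stream = OpenAIStream(completion);
  return new StreamingTextResponse(stream);
}
```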

So how does the SDK compare to existing LLM app stack tools, like LangChain? I checked with Rauch, who said that the Vercel AI SDK is “focused on helping devs build full, rich streaming user interfaces and applications with deep integration/support for frontend frameworks,” whereas “LangChain is focused on ETL [Extract, transform, and load] and prompt engineering.”

Rauch added that the AI SDK has an integration with LangChain. “Devs can use LangChain for prompt engineering and then use the AI SDK for streaming and rendering output in their applications,” he said, via X/Twitter direct message. He pointed me to the LangChain page in its documentation for further reference.

Example AI App: Memorang

To show off its new-found AI prowess, Vercel held an AI Accelerator Demo Day this month. The overall winner was a startup called Memorang, an ed-tech platform described by Vercel as “a complete platform for building AI-powered courses & study apps for any subject.”

Memorang is currently in private beta, but its quick introduction on Demo Day gave us a glimpse into what an AI-based application is nowadays. The founder and CEO, Dr. Yermie Cohen, explained that Memorang was “built on the modern and evolving AI stack, including Vercel, much of which didn’t exist months ago.”

Memorang platform

The first part of Memorang is an “AI-powered headless CMS” called EdWrite, which makes heavy use of generative AI for content generation — in this case, for educational material. Cohen pointed out the scaling benefits of using AI for this type of content. “Your custom workflows are effectively a content cannon that you can aim and fire to build thousands of assessments,” he said.

Using this content, Memorang is able to provide customers (presumably education organizations) with “AI-powered web and mobile study apps that are composable and white labeled.” He then discussed some of the benefits to users of this approach. “When a user completes a study session they get a personalized AI analysis of the performance behavior and tips to improve,” he noted. “Then when reviewing their answers, our AI learning assistant helps them learn more and dig deeper into each practice question.”

Memorang EdWrite

The AI Engineer Stack

While Cohen didn’t discuss the tech stack Memorang is using to create its platform, you can get a clue from looking at the company’s current job vacancies. Specifically, check out these requirements for the job of Full-Stack AI Engineer:

  • Expertise in TypeScript/JavaScript
  • Advanced knowledge of best practices in prompt engineering
  • Completed projects using OpenAI +/- Langchain
  • Experience with vector databases and semantic search
  • Expertise in the serverless stack, including GraphQL
  • Deep understanding of NoSQL database design and access patterns
  • Frontend skills involving React (understanding of hooks, components)
  • University Degree (technical field)

The list of tools, libraries and frameworks for the role is as follows:

  • Langchain.js
  • AWS Lambda
  • Pinecone / Weaviate
  • DynamoDB / MongoDB
  • Neptune/Neo4j
  • React + React Native
  • GraphQL
  • Next.js

Clearly, React is a big part of building Memorang’s user interface and hooking into AI stack components like LLMs, vector databases and LangChain.

Next Big Thing

For those developers wanting to check out a publicly available AI app, Vercel has a Pokedex template that uses the following tools:

  • Postgres on Vercel
  • Prisma as the ORM [Object-relational mapping]
  • pgvector for vector similarity search (see the sketch after this list)
  • OpenAI embeddings
  • Built with Next.js App Router
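
To give a flavor of the pgvector piece, here is a hedged sketch of a similarity query issued through Prisma’s raw-query escape hatch. The pokemon table and embedding column are assumptions modeled on the template’s description, not its actual code:

```typescript
// similar.ts — nearest-neighbor lookup using pgvector's `<->` distance operator.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function findSimilar(embedding: number[], limit = 5) {
  // pgvector compares against a vector literal like '[0.1,0.2,...]'
  const vector = `[${embedding.join(",")}]`;
  return prisma.$queryRaw`
    SELECT name, embedding <-> ${vector}::vector AS distance
    FROM pokemon
    ORDER BY distance
    LIMIT ${limit};
  `;
}
```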

But probably the best place to get started with the Vercel AI SDK is with Vercel’s quickstart documentation. It has instructions for both Next.js and SvelteKit. If you’re still looking for ideas, check out Vercel’s AI app templates and examples.

One final note: clearly Vercel is not yet finished rolling out its AI features. This recent comment on X, by Vercel VP of Developer Experience Lee Robinson, sums it up:

Next big thing? "Gotta be AI, in some form, and there's lots we could do here!"

The post Vercel’s Next Big Thing: AI SDK and Accelerator for Devs appeared first on The New Stack.

]]>
A Developer’s Guide to Working with Icons https://thenewstack.io/a-developers-guide-to-working-with-icons/ Tue, 29 Aug 2023 15:21:28 +0000 https://thenewstack.io/?p=22716676

Working on User Interface (UI) back in the early 90’s, before any of the common standards were established, was interesting.

The post A Developer’s Guide to Working with Icons appeared first on The New Stack.

]]>

Working on user interfaces (UI) back in the early ’90s, before any of the common standards were established, was interesting. I remember working on a communications package whose close button used a hand-drawn sunset as its icon. The developer wasn’t happy when I pointed out that an icon a user had to think about too much wasn’t really that useful.

Icons are getting slightly rarer within user interfaces. While they still have their purpose — for example, standing in for menus, especially for smaller screens — they are used less often than they used to be. Experimentation is more forgiving, and touch interactions allow for more control of the screen. Futurists assumed that the world’s population would read text less and rely on images more. Of course, today we actually create and consume a great deal more English text than in the past, largely because of mobile phones. We use emojis to accompany our missives, but icons are relegated to a visual shorthand where needed. But they are still everywhere.

If you work on a user interface, you may well need to select icons — even if a graphic designer will construct the final resource. This post guides the developer who has taken responsibility for UI through the stages of icon selection.

On the left is the icon strip taken from an app for a well-known social media platform. (Since I captured this, one of the icons that represents “verify” has changed. Suffice it to say, quite a lot about this app has changed recently.) The icons are usually paired with words if the media query detects enough room, but the strip on the left will be used otherwise.

Most of the images feel familiar. About half use retro representative objects (the bell, the magnifying glass, an envelope). Simple shapes with minimal drawing are preferred, and we will see why later.

Note that the single simplistic pawn-like human figure is used for “Profile”, which means that you might be puzzled as to what the two pawn figures next to each other represent — it is actually the new “Communities” functionality, but it shows that using multiple icon shapes is unwise if the intended meaning is unrelated to the single case.

By comparison, on the right is the icon strip for another similar social media application — it’s a third-party app for another well-known platform. Note that some of the icons are taken from the platform graphics (the at symbol, the heart, the octothorpe) so only a regular user of this social media platform could immediately discern their meaning.

There is also an even more retro representation; a feather (or quill) standing in for “compose”. Where possible, icons should make a thematic set; that is, they should be derived from similar viewpoints. This helps users to work out meaning without cognitive friction. Obviously, a feather is not really coming from the same place as some of the other symbols. Or worse, a feather and a bell might be linked with gothic horror.

But the circular icon third from the bottom is quite unclear — in fact, it stands for “local”. Oddly enough, it appears to be two figures next to each other… again.

The Old Ones

Skeuomorphism is the design practice of making a user interface resemble the real-world object it is emulating. The term also gets used for icons that refer to old objects the real world doesn’t see that much of anymore. If you are old enough, you will have picked up a telephone receiver. Yet the icon for “phone call” remains the one below today. Similarly “message” uses an envelope, as if we send letters off regularly like in a Jane Austen novel. This type of retro is now such a common feature it is not really commented on anymore. Old objects simply were more closely identified with a single function, so are better for icon design. An old home telephone only ever did one thing.

Example Icon Selection

To give ourselves a challenge, let’s think about an icon to represent “seasons”.

A quick word on licensing and ownership. If you take icons intended for production use, check licensing and attribution conditions. The Noun project is a good source of reusable icons with simple attribution terms. They actually have an API, but in general, a web page is the best way to explore the images. While generative AI is in limbo in terms of copyright, projects like Adobe Firefly use AI, but only over their own extensive stock of images. Large corporations will usually have art resources, although they may not be immediately appropriate for software use.

On my first try, I picked these two images from the Noun Project after entering in “seasons”:

Both try to encapsulate all four seasons. The first icon doesn’t seem to represent the usual seasonal order, and the tree may not represent autumn for everyone. But it does nicely represent four quarters, and the circular nature underlines a life process.

The second image does better to represent seasons as part of a cycle. Each image could be separated as required for further use. Using a leaf for spring breaks the climatic metaphor used in the other three seasons; whether a rain cloud is autumnal comes down to where you live.

Of course, we have another option these days. I asked Midjourney to produce an icon. While it is hard to keep it to simple lines, it can easily produce a more artistic and quixotic outcome:

The trick to finding the right image is to focus on the context of the term you need. Find all the aspects of the meaning, and make sure you know which aspects are needed in your app.

For “seasons” we have multiple competing aspects:

  • Spring, Summer, Autumn, Winter
  • Natural cycle
  • Four quarters, division, period
  • Climatic changes

This seems trivial, but different apps will need to place emphasis on different aspects, which then guides how an icon could be selected. For a gardening app, we definitely care about the order and the cycle. For a tax app, it is the quarters and division of a year that might be of interest.

We know that icons need to scale for use within apps that adapt to screen size. Predictably the simpler shapes of the second icon survive reduction to common size specifications better:

The Midjourney icon, probably because of the color range, actually retains its artistic feel even as it becomes indecipherable.

Icons are prey to CSS operations on their images. So if they are selectable we have to think about how the CSS will affect a button as it is selected, hovered over, inactive, etc. We can see a little of this going back to the social media icons:

The familiar birdhouse icon (which is no longer so apposite) on the top line alters by inversion as the text thickens to indicate selection; and gains a blue mark to indicate activity. The home icon from Elk uses a color change to indicate selection, whereas the Bluesky home icon goes for text thickness and inversion again.

This is further complicated by color “themes” — like “dark”, which all the apps in the above examples are running. This has led to icons being selected with the following properties:

  • Simple, probably retro objects that quickly indicate functionality.
  • Single-line drawings, which can change color or thickness and retain meaning.
  • Shapes that can invert easily, like two of the “Home” icons above.
  • Shapes that are reasonably scalable.

Product icons are always a different kettle of fish, as they are acting on a far larger stage. It isn’t unusual for a product icon to be derived from one of the in-app icons that are unique to the product — but they succumb to an additional range of corporate forces beyond the remit of this post. However, like most long-lived software, they go through revisions over time.

With so many ways to get icons, it is easy to find suitable candidates for probably 70% of what you need. It is only those last items in the set that you may need to work on. Try to keep thematic similarity, and remember that the actual art in the image can easily be changed — and probably will be.

The post A Developer’s Guide to Working with Icons appeared first on The New Stack.

]]>
Dev News: Deno’s Fresh Updates, New Bun and Whither Gatsby https://thenewstack.io/dev-news-denos-fresh-updates-new-bun-and-whither-gatsby/ Sat, 26 Aug 2023 12:08:22 +0000 https://thenewstack.io/?p=22716632

Fresh 1.4 is out. Fresh is a relatively new full-stack web framework for Deno. This update focuses on the overall

The post Dev News: Deno’s Fresh Updates, New Bun and Whither Gatsby appeared first on The New Stack.

]]>

Fresh 1.4 is out. Fresh is a relatively new full-stack web framework for Deno. This update focuses on the overall developer experience. According to Marvin Hagemeister, who leads the project full time, the update includes faster page loads with ahead-of-time compilation, custom HTML, head and body tags, and making it easier to use shared layouts.

Fresh compiles assets on the fly, which Hagemeister noted enables lightning-fast deployments with no build step. But the Fresh team realized that just-in-time rendering with large “islands” was sluggish. (Deno defines islands as “isolated Preact components that are then hydrated on the client within a statically-rendered HTML page.”)

“We arrived at a pre-compile solution that results in assets being served about 45-60x faster for a cold start of a serverless function, with minimal impact on deployment times,” Hagemeister wrote in a blog post on Deno’s website. “The savings depend on the size of the island, but even for small ones the improvements are very visible.”

He added that Fresh will always use JIT compilation when running the development server, so that the server can respond to requests as quickly as possible without waiting for asset compilation to finish.

Fresh 1.4 also allows developers to render the full HTML document themselves, rather than having Fresh create the outer HTML structure, up to the body tag, internally.

The updated Fresh also incorporated support for _layout files, which can be put in any route folder and Fresh will detect all the layouts that match and stack them on top of each other. The example Hagemeister provided is when developers want the header or footer to match across a website.

“Think of the header or footer of a website which is the same component across routes,” he wrote. “Previously, you could do that in routes/_app.tsx, but there was no way to go beyond that. Creating a shared layout for some sub routes in your app required extracting the code into a component and importing it into all routes manually.”
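
Per the Fresh documentation, a shared layout of the kind Hagemeister describes looks roughly like this sketch (the header and footer markup are illustrative):

```tsx
// routes/_layout.tsx — wraps every route in this folder and below.
import { PageProps } from "$fresh/server.ts";

export default function Layout({ Component }: PageProps) {
  return (
    <>
      <header>Shared site header</header>
      {/* The matched route renders here */}
      <Component />
      <footer>Shared site footer</footer>
    </>
  );
}
```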

For its next trick, Fresh plans to overhaul its plugin system to make it easier to use.

Whither Gatsby?

It’s been a month since the last commit to Gatsby, the Jamstack framework and platform company, according to its GitHub repository. Fred Schott, the co-creator of the web framework Astro, first noted that there had been no commits in the past 24 days and zero pull requests.

His observation came as a reply to a Twitter thread about Gatsby.

“Is Gatsby dead?” asked Sébastien Lorber, “docusaurus maintainer” at This Week in React. “Not ‘officially’ but its future does not look super promising.”

Netlify, which acquired competitor Gatsby earlier this year, responded to Schott to say that it’s still investing in Gatsby: “Fear not! Gatsby is vital to a great many of our customers. Updates to React 18 and Gatsby Adapters are the most important update, but we’re also busy investing in platform primitives for a stronger Gatsby.js (and other frameworks, too).”

This discussion comes on the heels of a July restructuring at Netlify related to a shift in business focus that is tangentially related to Gatsby.

“Late last year, we made a conscious decision to evolve the business plan to expand beyond our core Netlify product, and introduce a web development platform that serves the needs of not only developers (historically our core audience), but also Enterprise Architects and Marketers,” wrote CEO Mathias Biilmann, in a message to employees on Netlify’s blog.

“As we’ve shared with you in many Kickstart meetings and town halls, we have been building a differentiated vision that is the right one for the long-term success of Netlify. Our acquisitions of Gatsby and Stackbit have been a significant part of this fundamental evolution.”

He added that a priority will be investing in initiatives that further Netlify’s business goals and cease to invest in things that don’t.

“We need to focus on investing in Product and Engineering initiatives that help us solve business problems for the Enterprise that no one else can,” Biilmann wrote.

After that July announcement about restructuring and layoffs, Sam Bhagwat, Gatsby’s co-creator, tweeted, “If you saw the news around the Netlify layoffs yesterday, there are some great ex-Gatsby folks who are looking for a new role.”

Bun 1.0 Release Scheduled

Bun version 0.8.0 is out, with new debugger support and fetch streaming. This version also unblocks SvelteKit. Bun is a JavaScript runtime, bundler, transpiler and package manager.

The project now implements debugger support via WebKit’s Inspector Protocol, according to the release notes. The notes also report that improved support for environment variables in Worker has unblocked SvelteKit, and it recommends scaffolding your project with create-svelte.

Other updates:

  • Support for Nuxt.
  • Fetch response body streaming, which means developers can now stream data from a fetch response instead of waiting for the entire response to be downloaded (sketched below).
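
The streaming piece can be as small as this sketch, which relies on Bun treating the response body as an async-iterable ReadableStream (the URL is a placeholder):

```typescript
// stream-fetch.ts — process a response incrementally instead of buffering it.
const res = await fetch("https://example.com/big-file");

if (res.body) {
  const decoder = new TextDecoder();
  for await (const chunk of res.body) {
    // Each chunk arrives as soon as the server sends it.
    console.log(`received ${chunk.byteLength} bytes`);
    decoder.decode(chunk, { stream: true }); // decode incrementally if it's text
  }
}
```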

Bun 1.0 is set to launch Sept. 7.

The post Dev News: Deno’s Fresh Updates, New Bun and Whither Gatsby appeared first on The New Stack.

]]>
A Playground for LLM Apps: How AI Engineers Use Humanloop https://thenewstack.io/a-playground-for-llm-apps-how-ai-engineers-use-humanloop/ Tue, 22 Aug 2023 15:52:01 +0000 https://thenewstack.io/?p=22716352

In the evolving LLM app stack, a British company called Humanloop has — perhaps accidentally — defined a new category

The post A Playground for LLM Apps: How AI Engineers Use Humanloop appeared first on The New Stack.

]]>

In the evolving LLM app stack, a British company called Humanloop has — perhaps accidentally — defined a new category of product: an LLM “playground.” It’s a platform where developers can test various LLM prompts, and then deploy the best ones into an application with full DevOps-style measurement and monitoring.

To understand exactly what Humanloop offers developers, and how it became one of the leading “playground” toolsets, I spoke to Humanloop co-founder and CEO, Raza Habib.

I first learned of the term “playground” in the LLM app stack diagram created by Andreessen-Horowitz (a16z). But what does it mean and where did the term originate?

The emerging LLM app stack (via a16z)

Habib, who holds a Ph.D. in machine learning from University College London, explains that it derives from OpenAI.

“When OpenAI first released GPT-3, they just wanted to have an environment where people could go try the model — and they called it the playground. And so […] the name has stuck around. But I think the point is that it’s an environment to interactively try things with different models.”

Habib also noted that a16z didn’t initially know where to place Humanloop in its stack.

“I think we could have belonged in a couple of different places on that diagram,” he said. “But at its core, we help developers evaluate and take steps to improve their prompts and AI applications.”

Let’s take a step back. As Habib explained it, LLM applications start with a base model — such as GPT-4 or Claude — or maybe your own large language model. To begin creating an application you need a “prompt template,” which Habib described as “a set of instructions to the model, with maybe gaps for input.” You then “chain together” all of this with other models or with information retrieval systems to build out a whole application.
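
A prompt template can be as simple as a string with named gaps. This toy TypeScript sketch shows the idea — the {{...}} delimiter and all the names here are illustrative choices, not Humanloop’s own format:

```typescript
// Fill each {{name}} gap in the template with a caller-supplied value.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => vars[name] ?? "");
}

const template =
  "Translate the following sentence into {{language}}, keeping a {{tone}} tone:\n{{sentence}}";

console.log(
  renderPrompt(template, {
    language: "French",
    tone: "formal",
    sentence: "Prompt iteration works best as a team sport.",
  }),
);
```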

Where Habib and his co-founders spotted an opportunity in this process was in collaboration — helping technical users work with non-technical users to try different prompts and evaluate them.

“What we’ve found, speaking to people working on this early on, is [that] it’s very iterative,” he said, regarding the process of building an LLM application. “It requires collaboration between technical and nontechnical people to find good problems and get these systems working well. And evaluation is really hard, because it’s very subjective.”

Diagram via Humanloop

Use Cases and How It Works

One of Humanloop’s customers is Duolingo, a popular language education application. As with many other tech companies over the past year or two, Duolingo has been busy adding AI to its core product. A recent blog post explained that it uses AI in a variety of ways, including helping its staff create lessons and “build courses faster.” Writing prompts are at the core of this:

“Here’s how our AI system works: We write a “prompt,” or a set of detailed commands, that “explains” to the AI model how to write a given Duolingo exercise.”

Duolingo is careful to emphasize that the ultimate responsibility for its lessons and courses falls on its human instructors. Nevertheless, it’s clear that AI is helping a lot — both with template design and to “fill in the blanks.”

Where Humanloop comes in is to help Duolingo get the right type of content out of the LLMs.

“It [the content] obviously needs to be appropriate to the learner, the right tone, the right language, vocabulary that’s appropriate, etc,” Habib explained. “So, it’s not trivial to take the base models and actually get them to do what you want. And so what we provide is a set of tools for iterating on, collaboratively, your prompts and your workflows; measuring performance in production; and then also being able to monitor and evaluate things over time.”

Typically Humanloop is used at the prototyping stage. A team of people will open Humanloop (which is browser-based) and they will see “a playground-type environment.”

“They can try out different models, they can try out different prompts,” Habib continued. “They can include that in a sequence or workflow. They work on that till they get to a point where it seems to be working reasonably well [and] now it’s time to go in and try it out more seriously, beyond just eyeballing things. They’ll then typically run more quantitative evaluation, and so we have the ability to set up evaluation functions. They’ll deploy that to production, and they’ll monitor how well it’s working — including being able to gather end user feedback.”

A similar workflow happens when doing tweaks or testing out new prompts, so it’s an iterative process that doesn’t stop after the application has been deployed.

Playing with Others

I asked whether Humanloop can be used in tandem with other products in the LLM app stack, such as the orchestration framework LangChain and vector databases like Pinecone?

“It integrates natively with LangChain [and] a couple of others,” he confirmed. “So you can switch on an environment variable in LangChain, and then you’ll automatically start getting logging and monitoring of your applications in Humanloop. So it’s really like a one-line code change and then suddenly you can see what data is flowing through, and start gathering feedback and take actions to improve and debug.”

Habib noted that Humanloop has a feature similar to OpenAI’s functions, which it calls “Tools.” This allows users to “connect an LLM to any API to give it extra capabilities and access to private data” — for example, to connect to a vector database. But Habib cautioned that Humanloop isn’t an orchestration framework like LangChain.

“We believe that that’s best done in code,” he said, regarding orchestration. “We’re primarily there to manage the prompt engineering and then evaluate and take steps to improve those models.”

Advice for AI Engineers

The primary users of Humanloop are developers. With the current popularity of LLM applications, I asked Habib what advice he’d give to developers who want to do more work in this area.

“In terms of new skills you want to learn, I think having an awareness for how the models work and an appreciation that this is now stochastic. So if you haven’t had any experience with machine learning before, and you’re coming into it, you’re probably coming from a world in which software is deterministic — [where] you can write unit tests and it always does exactly the same thing.”

With LLMs, though, software isn’t necessarily deterministic. So learning to deal with that randomness and developing an intuition about the limits of LLMs is important, in Habib’s view. Which, of course, is where an LLM playground comes into play.

The post A Playground for LLM Apps: How AI Engineers Use Humanloop appeared first on The New Stack.

]]>
The Project Helping All Browsers Deliver the Same Web Platform https://thenewstack.io/how-interop-2023-will-move-the-web-forward/ Tue, 22 Aug 2023 12:00:07 +0000 https://thenewstack.io/?p=22715864

Last year Interop, “an effort to increase interoperability across browsers in key technical areas that are of high priority to

The post The Project Helping All Browsers Deliver the Same Web Platform appeared first on The New Stack.

]]>

Last year Interop, “an effort to increase interoperability across browsers in key technical areas that are of high priority to web developers and end users,” delivered significant improvements in all four main browsers. This time around, it’s not just about getting individual browsers to make feature implementations more compatible with the same feature in other browsers, but about measuring how well all the browsers deliver the same web platform.

For browser makers, the need to allocate development resources can make it feel like there’s a tension between adding new features to the web platform and going back to fix interoperability issues in features that have already been shipped. Interop focuses on technology that is already specified in web standards with shared test suites in web-platform-tests (WPT), but it covers a mix of features that have already shipped (in some or all browsers) and features that are still being implemented — some rather belatedly.

Interop is open to any organization implementing the web platform, but since it’s about having web platform technologies work the same way across multiple browsers, it’s run by the four major browser implementers and two open source consultancies who do a lot of work on the web platform: Apple, Google, Microsoft, Mozilla, Bocoup and Igalia.

Interop progress as of August 2023.

Government regulation on competition is one part of pushing that interoperability forward; the Interop project is another key part, Rick Byers, web platform area tech lead at Google, told The New Stack. “It’s in society’s interest as a whole when browser vendors feel the need to be interoperable,” he said.

“It’s really good that there’s companies out there that realize that the web standard process is not only for people that build a browser and want to show ads, but that everybody benefits from it, because we can write more secure and stable applications if the platform gives us solutions that we don’t have to simulate [in JavaScript],” explained developer advocate Christian Heilmann — who used to work on the Firefox and Edge browser teams.

For instance, he pointed out, before the dialog element was supported across browsers, every developer had to build their own with a positioned div element, write the JavaScript code to show and hide the dialog — and usually with tweaks for different browsers. It might sound like a trivial example, but that’s a lot of unnecessary work repeated on every project.

Moving the Whole Platform Forward Together

Browser makers could (and do) fix interoperability and compatibility issues individually using WPT, but the value of Interop is that it improves coordination of what all the browsers work on each year by focusing on where developers report problems. Those developer pain points are gathered through the surveys that MDN runs — both the big annual State of CSS and State of JS research projects and, in the future, shorter regular surveys on MDN — and a bug tracker for issues submitted via GitHub; these were then turned into formal proposals for the Interop participants to vote on in November 2022.

This time around, that generated a lot of requests and suggestions, Igalia developer advocate Brian Kardell told us. “Last year, we weren’t very proactive. We had a wider call for this year, and we left it open a little longer and we had, at peak, maybe 90 different issues open.”

“Interop 2023 is the biggest, most aggressive attempt at interop I think we’ve ever made.” — Rick Byers, web platform area tech lead, Google

There are 26 focus areas (eight of which have been carried forward from previous years), compared to 15 in 2022, plus several investigations — where there is work to be done but the standard or the web platform tests aren’t mature enough to start implementing. “Among those focus areas are some things that developers have asked us for forever.”

The 26 areas all the browsers agreed to work on range from features everyone uses on the web without realizing, to those last annoying paper cuts in otherwise finished areas.

The point of Interop is often getting multiple browsers to the same stage. Firefox has supported multicolor fonts for a long time; the vector color font support that’s part of Font feature detection and palettes brings that to all the main browsers.

Browser progress chart.

On the other hand, Firefox lagged on Modules in Web Workers. “Right now, Web Workers don’t allow me to use other people’s JavaScript modules. I can send data in and get data out: I cannot have any interaction in between [anything with third-party dependencies],” Heilmann explained. “Web Workers become more important as we do more high-performance stuff in the background, especially with machine learning: you don’t want that on the main thread.”
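For readers who haven't hit this limitation: the fix Interop tracks is a one-line option when constructing the worker. A minimal sketch (file names are illustrative):

```javascript
// main.js (loaded as a module script, so import.meta.url is available)
// Classic workers can't use `import`; module workers can.
const worker = new Worker(new URL("./worker.js", import.meta.url), {
  type: "module", // the option the Modules in Web Workers focus area covers
});
worker.postMessage([1, 2, 3]);
worker.onmessage = (event) => console.log(event.data);

// worker.js: with type "module", third-party dependencies work here too.
// import { mean } from "some-npm-package"; // hypothetical dependency
// self.onmessage = (e) => self.postMessage(mean(e.data));
```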

That’s already improved since Interop 2023 began, as has Firefox support for Color Spaces and Functions, going from passing just over half the tests to almost 95%. That means designers can specify uniform gradients and color shifts so sites look the same in different browsers and on screens with different color gamuts, and developers can lighten or darken colors in CSS without having to recompute them. Operating systems are already beginning to support better color formats and if the web platform follows, “this world of more colorful, rich vibrant things becomes possible,” Kardell explained.
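As a small illustration of what that unlocks, here is a CSS sketch using the oklch() color function and color-mix() to derive a hover shade instead of hardcoding one (the specific values are illustrative):

```css
:root {
  /* A color specified in the OKLCH color space, beyond sRGB. */
  --brand: oklch(65% 0.2 250);
}

.button {
  background: var(--brand);
}

.button:hover {
  /* Lighten the color in CSS rather than recomputing it by hand. */
  background: color-mix(in oklch, var(--brand) 80%, white);
}
```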

Similarly, at the beginning of 2023, Safari had much better support for Math Functions, which let developers do things in CSS (like tracking mouse cursor position) that used to need Canvas or precompilers: now all three browsers score in the high 90s. Chromium browsers started the year with less support for Masking in CSS: applying the kind of image effects you’d use in a graphics application to a web page, like using an image or a gradient to mask or clip an image. Again, doing that in CSS avoids the need to use canvas for something that helps web apps feel more native on different platforms. Animating a graphic along a custom motion path with CSS is supported in all three browser engines, but doesn’t work quite the same way in all of them.
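Two of those features in miniature, assuming the standard syntax; the selectors and values are illustrative:

```css
/* Masking in CSS: fade an image out with a gradient, no canvas needed. */
.hero-image {
  mask-image: linear-gradient(to bottom, black 60%, transparent);
}

/* Motion path: animate a graphic along a custom path. */
.orbiting-dot {
  offset-path: path("M 0 100 A 100 100 0 1 1 200 100");
  animation: travel 4s linear infinite;
}

@keyframes travel {
  to {
    offset-distance: 100%;
  }
}
```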

Making Less Work for Developers 

Many focus areas improve developer productivity, like being able to rely on the border-image CSS property in all browsers for replacing the default border style “with one element rather than five nested elements just to get a frame around something,” Heilmann said. And some go back to basics: URL is about getting all browsers to agree on an implementation of URLs that matches what’s defined in the standard.

“It’s quite amazing how many things are a valid URL and how dangerous that could be.” — Christian Heilmann

Drawing graphics on the screen with the canvas element and API lets you script graphics, but running that on the main browser thread can block anything else a user is trying to do on the page. Offscreen canvas, as the name suggests, puts that offscreen where it won’t interfere with rendering handled in a Web Worker. It’s widely used in game development, Heilmann explained: “We [always] had one canvas that showed the game and we had one canvas that did all the calculations; and that way, the performance was so much better. Standardizing that rather than having to hack it every single time would be a really, really good thing.”
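The hand-off Heilmann describes looks roughly like this in JavaScript (file names are illustrative):

```javascript
// main.js: hand the canvas to a worker so drawing can't block the page.
const canvas = document.querySelector("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker("draw-worker.js");
worker.postMessage({ canvas: offscreen }, [offscreen]); // transferred, not copied

// draw-worker.js: rendering now happens off the main thread.
// self.onmessage = ({ data }) => {
//   const ctx = data.canvas.getContext("2d");
//   ctx.fillRect(10, 10, 100, 100);
// };
```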

It’s not just a specialized technique though; most web developers use offscreen canvas already but without realizing it, because they use it through a library, Kardell pointed out. “A very small number of libraries need to update to take advantage of that and then suddenly, everybody in all the places that they’re using it will get better performance on maps and drawing tools and Figma and all kinds of cool stuff.”

Custom properties (or CSS variables) are another long-standing request. They let you define colors, font sizes and other settings once, directly in your CSS, rather than repeating them in every selector block, which simplifies things like switching a site between light and dark mode. This focus area concentrates on @property, which lets you set default and fallback values when you define a custom property in a stylesheet; again, this isn’t new, but it hasn’t been consistent between browsers.
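A short sketch of what @property adds over a bare custom property (names and values are illustrative):

```css
/* Register the property with a type, inheritance behavior and a default. */
@property --theme-accent {
  syntax: "<color>";
  inherits: true;
  initial-value: rebeccapurple; /* fallback if nothing sets it */
}

:root {
  --theme-accent: steelblue;
}

a {
  color: var(--theme-accent);
}
```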

CSS Pseudo-classes add a keyword to specify a special state, like the way a button looks when you hover over it, so you can style it. That will be useful for input, but also media handling for full screen, modal and picture-in-picture; and interoperability is particularly important here, Heilmann noted. “We need to actually make sure that every browser does them the same, because if we have yet another one that is only in Chrome it costs us a lot of time and effort.”

Isolating part of the page with Containment in CSS, so it can be rendered independently, whether that’s a navigation bar or an animation, is “very good for performance,” Heilmann said. Although it can be somewhat complex to work with, because it requires some understanding of rendering and layers, he suggested most developers will use it through tools like GreenSock rather than directly.

Other focus areas include substantial work on long-standing web developer priorities that will be widely used. “has() and Container queries are literally the number one and number two requests for a decade from web developers and we’re getting both,” Kardell enthused.

How to Unblock Progress

“Container queries is a CSS mechanism to reason locally about layout decisions,” Byers explained: “To understand the context, the container you’re in, and do responsive design in a way that works well across components, so you can build more reusable components that work in whatever layout environment they’re put into.”

If you put a component in a container that doesn’t have as much space, you could pick which elements should be hidden, or just switch everything to a smaller font.
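In CSS, that looks something like the following sketch (class names and breakpoints are illustrative):

```css
/* The card adapts to the space its container gives it, not the viewport. */
.sidebar,
.main-column {
  container-type: inline-size;
}

.card h2 {
  font-size: 1.5rem;
}

/* When the containing element is narrow, shrink the heading and hide extras. */
@container (max-width: 400px) {
  .card h2 { font-size: 1rem; }
  .card .meta { display: none; }
}
```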

“This is a direct answer to the needs of the componentized web for something like reusable components to put together bigger applications,” Heilmann told us.

To Byers, it also highlights the opportunity Interop gives developers to highlight what they need.

“Just a couple years ago, most of the browser engines were saying, ‘we don’t think this can be a feature, we don’t think this is something we can put in our engines’. And now not only does it exist in Chromium and WebKit, and it’s coming in Gecko, but it’s something all three of the major engines believe — that by the end of the year we can have working interoperability and stability and something you can actually depend on. For web developers, that should be enormously exciting, not just because that feature is exciting to them, but because it signals this is the kind of thing that web developers can get done on the web. When they come together and push and say, ‘Hey, this is a capability we really want in the web’.”

Similarly, the idea of having a parent selector (:has()) to go with the child selector has been in the CSS spec since the first draft of CSS 3 in 1999. Heilmann suggested thinking of it more as a family selector: “It can allow you to look around you in the DOM in each direction, which we couldn’t do with CSS before, and I think that’s a huge step towards people not having to use JavaScript to do some things.” That would make it easy to have one style for an image that has a caption and a different style for pictures that don’t.
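Heilmann's caption example translates directly into a couple of rules (the styles here are illustrative):

```css
/* One style for a figure that contains a caption... */
figure:has(figcaption) {
  border: 1px solid #ccc;
  padding: 0.5rem;
}

/* ...and another for pictures that don't. */
figure:not(:has(figcaption)) {
  border: none;
}
```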

Like container queries, the implementation was held up by worries about slowing down page loading (because the rendering engine might have to go up and down the DOM tree multiple times to apply a developer’s CSS rules correctly).

“Performance has always been the barrier, because of the way pages are assembled and because of the way CSS and rendering engines have been optimized in browsers,” explained Igalia web standards advocate Eric Meyer, who built one of the earliest CSS test suites. “When we lay out a page, we generally do one pass: we do not want to have to do multi-pass [rendering] because that’s a performance killer.” Two decades on, computers are faster than when has() was first proposed, but “you could still grind a browser to a halt with container queries or the ‘parent’ selector”.

“It’s a scary thing that no one wants to take up because it’s computationally complex, and it could just really blow up performance,” Kardell added. Years of discussions about these features might have helped make them seem impossible, he suggested.

“When you have things that have been discussed a lot, that seem hard, that look like you could invest a lot of money and a lot of time and it might not go anywhere — it’s unsurprising that nobody wants to break the standoff!” — Brian Kardell, Igalia

In the end, investigation and experimentation by multiple browser teams (and a bottom-up approach suggested by Igalia) showed that these features could work without degrading performance, including compromises to avoid the risk of circular dependencies and undefined behavior.

Some of the work those features relied on was done in a previous iteration of Interop, explained Kadir Topal, who works on the web platform at Google, highlighting a new pattern in browser development that’s emerging from the collaborative approach to compatibility. “Since all the browsers shipped that [work], there was something that we could build on top of. I think we’re going to see more and more of that, where we can ship together something in one year, and then build on top of that in the next year. I can already see some of that coming for the next year.”

The technical work is critical for unblocking implementations that give developers what Byers considers “most of what they were asking for” with container queries, but he also noted that this is part of a different philosophy in building the web platform that’s not just about what browser engines need.

“The larger story for me is the shifting of how we approach platform design to just be more humble and listening to developers more. Browser engineers used to say ‘anything that can introduce cycles and delays sounds really scary, so like I refuse to go there on principle.’ Over the last several years, I think the industry as a whole, but certainly the Chrome team, has really had this transformation of saying our number one job is to serve developers and really listen and really be empathetic to their pain points. We’re hearing consistently that developers are having problems with this sort of thing. It’s a legitimate problem. What are we going to do about it rather than just say it’s not possible?”

Finishing the Last Mile

Some focus areas continue from previous years: CSS subgrid has now landed in the experimental version of Chromium, using code contributed by Microsoft. Before Interop 2022, only Firefox supported subgrid; Safari added it in 2022 and now it’s going to be broadly available — and interoperable.

Byers compared that to Flexbox, which had been in browsers for years “but it was just so different in different engines for so long, and there were paper cuts all over the place that we’re still cleaning up from. The way grid has happened; as much as we wish some of it would have gone faster, I think it’s the model for how big new things can get added to the web in a way that’s high quality and consistent across browsers, and not that far off timewise from each other in the different engines.”

Flexbox scores are already in the 90s for all the browsers, although polishing off the last bugs takes time.

“When you look at the numbers [for some focus areas], there’s almost a question of ‘why are they in Interop?’” Meyer noted. “That seems really interoperable, how come that’s there?” For example, the stable and experimental numbers for Media Queries were already high.

“That actually points to one of the things that Interop was intended to do: in some cases where things are almost but not quite universal, let’s get them there. There’s just a few bugs across the various browsers that keep us from being 100% across-the-board compatible, so let’s get there.”

With such high scores, he explained, browsers might be tempted to prioritize areas like Web Codecs, where Safari was only scoring 59% at the beginning of 2023, over Media Queries with 99% compatibility. “It would be very easy to say that one percent isn’t worth devoting time to when we have these other areas that really need to be dealt with. Interop is a way of drawing people back to say ‘Hey, we really do need to correct this’.”

In the case of Media Queries, the holdup is Firefox with a score that’s now gone from 82.7% at the beginning of the year to 99% already (and 99.9% in nightly builds). “They really only need to fix whatever that percent tests, and Interop is meant to encourage that.”

Consistency Is When Everyone Wins

One of the most interesting charts in WPT has always been the number of tests that fail in just one browser, neatly illustrating the compatibility failures that can bite developers. This year Interop is highlighting the inverse of that: the number of tests that are passing in all the engines (Blink, Gecko and WebKit). That’s more important than any one browser having a higher score, Byers said. “It’s not about who wins: I want developers to win by that line getting as high as we can make it.”

“It’s the only number that really matters,” Kardell suggested. Browsers can score in the 80s or even the high 90s individually, but the Interop score for the focus area might be much lower. “By old standards, that’s off to a pretty good start, but if it turns out that each of them did a completely different 80%, then that’s not the case in any way.”

While Chromium browsers are slowly improving scores for CSS Masking (from 65.8% at the beginning of the year to 69.1% in experimental builds at the time of writing), the Interop score for this focus area is improving faster — from 56.7% to 64.3% — because the work in Chromium is happening at the same time as Firefox and Safari invest further in an area where they were already scoring in the 80s.

Another good example is pointer and mouse events, where the lowest individual score was 46.6% with other browsers achieving 67% and even 80%. “Looking at the individual browser numbers, you might think, well, the worst one is almost half support,” Meyer warned. “But if you create a Venn diagram and in the middle is what they all support consistently, where the three overlap is only a third [of the standard].”

While that 33.3% has improved to 61.3% in experimental browser builds (at the time of writing), this Interop focus area doesn’t cover using touch or a pen with pointer-events — which is important on tablets and phones.

There are a lot of IP challenges and patent issues in this area and a messy history (when Microsoft first proposed pointer events, Apple suggested its patents on touch events might block the W3C from adopting it as a standard, and Apple’s resistance to supporting pointer events led to Google planning to remove it from Chrome at one point). But while that explains the significant differences between browsers in this focus area, vendor politics isn’t behind leaving out touch and pen: it’s a more prosaic problem — WPT doesn’t include the tests to cover them, explained Byers (who did a lot of the early work on touch and pointer events and was an editor on the spec until recently).

“A lot of the time [when] things don’t make it into Interop, it’s because the specs aren’t ready or we don’t have tests we can push for conformance.” — Rick Byers

“We have to do a lot of work to pay back the debt of not having touch input well tested,” continued Byers. “And sometimes there’s infrastructure issues like, does WebDriver support simulated touch input on all the major browsers? If not, then we can’t realistically push for common touch behavior across all browsers. We need to do this groundwork first.”

“There’s almost certainly challenges around actually even validating that touch behaves consistently across a Windows computer running Firefox and an iPhone running Safari, and all those different devices. Safari doesn’t support touch on desktop; generally, MacBooks don’t have touchscreens.” Even getting the infrastructure to run the tests will be tricky: “We can get desktops in the cloud to run our tests: it’s just harder to do that for mobile.”

Mobile testing is actually one of the active investigation areas in Interop 2023, because it’s currently not part of WPT’s CI infrastructure, and Topal indicated there would be more investment there in the second half of this year. Investigation areas often lay the groundwork for future focus areas. “Mobile testing as part of the Interop dashboards and scoring is something that we’ll hopefully be able to do next year,” he confirmed.

Always Improving, Never Finished

Between September and November this year, the Interop participants will look at what needs to go into the focus areas for 2024, and that includes assessing progress on Interop 2023. “That’s where the decision gets made on which of the features are now basically interoperable and where there are things we [still] need to keep track of,” Topal explained.

The Web Compatible focus area covers what Byers referred to as “a little grab bag of paper cuts” for “little niggly things that wouldn’t make sense on their own” but didn’t need to be an entire focus area. Some of these were focus areas in previous years: the work isn’t finished, but enough progress has been made that there are only a few issues to clean up.

That’s not just about going the last mile in developer experience, important as that is; there’s a bigger point about the continuous nature of web standards, Kardell pointed out.

“Interop is a strange project because it seems like it shouldn’t exist, because the point of web standards is that they’re standards.” — Brian Kardell

But even when browsers score 100% on these tests, it doesn’t mean that interoperability is done, especially for areas like viewport, which needs to support new classes of hardware as they come out, like folding phones; so the focus area was carried forward even though all the work that was agreed on last year got done. “There will be test cases that we haven’t thought of. There will be places where you still wind up getting a different result. The thing with standards is eventual consistency: increasing interoperability that is hopefully stable.”

As Topal noted: “It’s a living platform: it keeps getting developed, it keeps getting extended.”

That doesn’t mean the Interop list will just keep getting longer. “Everything we had in 2021 is still in the list somewhere,” Byers noted. “I’m not aware off the top of my head of an area that has reached 100% of all tests passed across the board. But whatever the numbers are, if it feels that [tracking an area] isn’t providing value to developers anymore, there’s no sense in us tracking it [in Interop].”

What Interop Doesn’t Cover

The Interop project is ultimately pragmatic. “What makes it into Interop are things that browser makers either are already working on or are about to work on,” Meyer explained. “Things that any browser maker has no intention whatsoever of working on, [they] don’t make it.”

MathML hasn’t been included in Interop because at least one of the browser makers objected to it and the group accepts vetoes. “If anyone puts in an objection, even if everyone else thinks it should be in there, it doesn’t go in unless you can convince the objector to rescind their objection.”

Those vetoes are usually about resources and priorities. “Whoever was objecting, would say something like ‘this is really cool, and we would love to work on it, but we can’t work on this and all these other things’.” That’s not an objection to the technology: it might be about a specification that’s not yet complete. “There’s no sense [in] us adding this to Interop when the specification might change halfway through the year and invalidate everyone’s work. Let’s wait until the spec is ready and then maybe next year, we can add it.”

That’s a realistic approach that underlines that the browser makers are serious about Interop, he suggested. “It’s nice to see browser teams saying ‘We have to set priorities’. They were actually thinking through ‘can we do these things’ instead of ‘sure, put it on the list and if we get to it, great’. There was none of that.”

“Our goal here is to prioritize what we think is the meaningful work that we just have to get done,” Kardell agreed.

The focus areas in Interop 2023 continue to concentrate on CSS, although they include JavaScript elements like Web Components and the catchall Web Compat category, which includes areas like regex lookbehind.

Partly that’s because there weren’t many submissions for JavaScript incompatibilities, Kardell told us (which may be a testament to the ECMAScript process).

But he also noted that while the web platform tests include some JavaScript tests, they don’t yet incorporate the ECMAScript Test262 test suite (new features can’t become part of ECMAScript without submitting tests to this suite), so tracking JavaScript focus areas would require doing that integration work. Some investigation has been done on how to keep the different test suites in sync, “but I don’t think we’re there yet,” he suggested.

“[Web standards] are constantly a learning process,” Kardell said, pointing out that for many years those standards didn’t even include formal test suites that went beyond individual browsers.

“Our idea of how we manage all this is slowly evolving and we’re learning and figuring it out better.”

The post The Project Helping All Browsers Deliver the Same Web Platform appeared first on The New Stack.

]]>
Dev News: RedwoodJS Drops Jamstack, Dropbox Reduces JS Bundles https://thenewstack.io/dev-news-redwoodjs-drops-jamstack-dropbox-reduces-js-bundles/ Sat, 19 Aug 2023 13:00:32 +0000 https://thenewstack.io/?p=22716155

RedwoodJS, a young fullstack JavaScript and TypeScript framework, is moving beyond its Jamstack SPA (single page application) roots to pursue

The post Dev News: RedwoodJS Drops Jamstack, Dropbox Reduces JS Bundles appeared first on The New Stack.

]]>

RedwoodJS, a young fullstack JavaScript and TypeScript framework, is moving beyond its Jamstack SPA (single page application) roots to pursue a server-first, full stack React framework, according to this post by Tom Preston-Werner. Preston-Werner, the founder and former CEO of GitHub, is one of the four founders of the RedwoodJS open source web development framework, which counts some 300 contributors.

“For the last year, the RedwoodJS team has been prototyping solutions to the framework’s lack of a proper server-side rendering (SSR) feature,” he wrote. “Today, I’m happy to announce that we have chosen to implement a modern SSR solution with a front-end server, leveraging React’s streaming capabilities. This will also allow us to add React Server Components (RSC) to Redwood as our solution to the many downsides of pure single page applications (SPAs).”

It’s a lot of acronyms, but Preston-Werner cited a list of reasons to switch to React Server Components (RSC), including:

  • Better SEO performance in the form of statically rendered HTML delivered to the browser. With server rendering, that’s “baked in,” he added — an advantage over the SPA architecture;
  • OG tags, which require statically delivered HTML. Again, server-side rendering solves this; and
  • Providing API options to connect beyond the GraphQL API backend.

“It’s challenging to get top-notch performance out of Redwood in a Jamstack environment,” Preston-Werner wrote. “AWS Lambda’s cold start times, code payload limits, and execution timeouts are all hurdles that need to be considered. Most Redwood users today already choose a serverful deployment strategy for exactly these reasons.”

The original goal was to make it possible for most of Redwood’s features to work in serverless environments. But from now on, Redwood will be optimizing for serverful RSC and all the advantages that will bring.

“You can read a full account of RSC’s advantages elsewhere, but more of my favorites are: smaller bundle sizes shipped to the browser, large libraries can be run server-side only (more bundle savings), quicker hydration, and easy server-side secrets,” he wrote. “RSC is the future of React. The React team has made this very clear and we are lucky to be in touch with their amazing team members to help us along this path.”

The Redwood team also released a new roadmap ahead of its first in-person and virtual conference, RedwoodJSConf, which is set for Sept. 26-29 in Grants Pass, Oregon.

Dropbox Reduced Its JavaScript Bundles by 33%

Wednesday, Dropbox published a post describing at length how it reduced its JavaScript bundles by 33%. Since excessive JavaScript is a known problem for some web apps and sites, we think this piece detailing how Dropbox decluttered is worth a read.

Its first change? A new bundler. The old one from 2014 didn’t incorporate many performance optimizations and was difficult to work with, Dropbox noted.

“While our existing bundler was relatively build-time efficient, it resulted in massive bundle sizes and proved to be a burden for engineers to maintain,” the post noted. “We relied on engineers to manually define which scripts to bundle with a package, and we simply shipped all packages involved in rendering a page with few optimizations.”

That became problematic over time, it added, leading to multiple versions of bundled code, manual code splitting and no tree shaking.

The cloud storage company switched to Rollup, a module bundler for JavaScript. The rest of the blog post details Dropbox’s deployment journey.
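The post doesn't include Dropbox's configuration, but the automatic code-splitting it credits is standard Rollup behavior once a build has multiple entry points (or dynamic import() calls). A minimal sketch, not Dropbox's actual setup:

```javascript
// rollup.config.mjs: illustrative configuration only.
export default {
  // With multiple entry points, Rollup automatically splits shared code
  // into common chunks instead of duplicating it in every bundle.
  input: {
    home: "src/pages/home.js",
    settings: "src/pages/settings.js",
  },
  output: {
    dir: "dist",
    format: "es", // ES modules also enable tree shaking of unused exports
  },
};
```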

“After rolling out Rollup to all Dropbox users, we found that this project reduced our JavaScript bundle sizes by 33%, our total JavaScript script count by 15%, and yielded modest TTVC improvements,” the post said. “We also significantly improved front end development velocity through automatic code-splitting, which eliminated the need for developers to manually shuffle around bundle definitions with each change. Lastly and perhaps most importantly, we brought our bundling infrastructure into modernity and slashed years of tech debt accumulated since 2014, reducing our maintenance burden going forward.”

Skeleton Squad Targets JavaScript Package Manager NPM

Socket Research revealed Monday that the Skeleton Squad, which targeted the PyPI ecosystem with malicious code, has also targeted the JavaScript package manager npm in its attacks.

“The latest combatant to enter the fray is an NPM package known as pyautodllxd,” Socket Research reported Monday. “This seemingly innocuous package was uploaded by an author named ‘T4hg’ and last updated on April 18, 2023.”

At first glance, ‘pyautodllxd’ doesn’t appear to impersonate any popular package or engage in typosquatting. Its purpose and target audience remain elusive, as both the ReadMe file and description were left blank. However, when Socket Research examined the postinstall command, it uncovered suspicious code.

The postinstall command runs a PowerShell command, suggesting that the attacker targets Windows operating systems, the research note pointed out.

“Upon closer inspection, we discovered a binary named ‘esquele.exe’ being downloaded from a Dropbox URL,” the post stated. “This stealthy approach allows the payload to be deployed without raising any red flags.”

After installation, the package simultaneously downloads the malicious executable and saves it in the temp folder for later execution. Socket Research noted that several vendors had marked the decoded PowerShell script as a malicious trojan.

The firm’s analysis found that pyautodllxd runs a hidden PowerShell window, downloads a script named bypass.ps1, and uses the “Esquele” function to add exclusion paths for drives C:\ and D:\, bypassing Windows Defender’s real-time protection.
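The underlying vector is npm's lifecycle scripts: any package can declare a command that runs automatically at install time. A generic illustration of the mechanism, not the actual malicious manifest:

```json
{
  "name": "innocuous-looking-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "powershell -WindowStyle Hidden -Command \"...\""
  }
}
```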

The Skeleton Squad left a cryptic message in Spanish in some of the packages published by T4hg, which translates to “They will all die in the hands of EsqueleSquad,” the research note added.

Nim v2.0 Released

Nim version 2.0 was released earlier this month. Nim is a relatively new programming language, but it’s used in web development, systems programming, game development, artificial intelligence, data science and scientific computing. Among its advantages: it’s fast and efficient, since Nim code can be compiled to native machine code. It’s also expressive and extensible, supporting metaprogramming. Finally, Nim code can be compiled to run on a variety of platforms, including Windows, Linux, macOS and FreeBSD. So it has a lot to recommend it.

“This is an evolution (not revolution) of Nim, bringing ORC memory management as a default, along with many other new features and improvements,” the release note stated.

It also cautioned that “Nim is a programming language that is good for everything, but not for everybody.” Its customizable memory management makes it well suited for unforgiving domains such as hard real-time systems and system programming in general, the post stated.

Among the new features are:

  • Better tuple unpacking. “Tuple unpacking for variables is now treated as syntax sugar that directly expands into multiple assignments,” the release note stated. “Along with this, tuple unpacking for variables can now be nested.”
  • Improved type inference. “A new form of type inference called top-down inference has been implemented for a variety of basic cases,” the release notes state.
  • Forbidden tags. “Tag tracking now supports the definition of forbidden tags by the .forbids pragma which can be used to disable certain effects in proc types,” it added.
  • A new standard libraries model. Essentially, the release overhauls Nim’s os module.

New users can download the language online.

Scheme Schism

John Cowan, the chair of the R7RS-large language standardization project, resigned his position in a public post on Google Groups this week. That project oversees the development of Scheme as a language for active use, rather than just teaching.

“I have come to the conclusion that I can no longer serve as Chair. I am exhausted by the effort, and I do not think that there is any further hope that I can get sufficient agreement among the different players to have any hope of coming to a conclusion,” he wrote. “On the contrary, agreement is further away than ever, and people’s views are more and more entrenched.”

This Hacker News thread offers background information about Scheme.

Web Frameworks as Superheroes

Image: React.js as a superhero, created by developer Matija Sosic with Midjourney.

This is simply too cute not to share: Developer Matija Sosic recently used the generative AI tool Midjourney to visualize web frameworks as superheroes. It features popular web frameworks such as Vue, React.js, Wasp and Ruby on Rails. React.js is heralded as the king of the frameworks, while Nest.js is literally a server-side beast of a character. The Wasp contributor promises to do more frameworks in the future.

The post Dev News: RedwoodJS Drops Jamstack, Dropbox Reduces JS Bundles appeared first on The New Stack.

]]>
Tailwind CSS Debate: Another Cool Tool Dissed by Web Purists https://thenewstack.io/tailwind-css-debate-another-cool-tool-dissed-by-web-purists/ Fri, 18 Aug 2023 15:10:56 +0000 https://thenewstack.io/?p=22716146

Earlier this week, Matt Rickard wrote a post entitled “Why Tailwind CSS Won,” which got to the front page of

The post Tailwind CSS Debate: Another Cool Tool Dissed by Web Purists appeared first on The New Stack.

]]>

Earlier this week, Matt Rickard wrote a post entitled “Why Tailwind CSS Won,” which got to the front page of Hacker News. Inevitably, it kicked off the latest round of polarized opinions on social media about a popular web development tool. Surprise, surprise: some people love Tailwind, and others hate it.

The reasoning on each side is also familiar: the developers who love it think Tailwind CSS saves them time and is easy to learn, while the developers who hate Tailwind think it “disrespects” the web platform. Replace Tailwind here with React, or virtually any other popular JavaScript-based tool of today, and you’ll get the same black-and-white opinions.

Where’s the Beef?

Tailwind CSS as a framework for developers is pretty easy to understand. Basically, it allows you to embed CSS styling code into your HTML code — to, as Tailwind’s tagline puts it, “rapidly build modern websites without ever leaving your HTML.” So it saves developers from having to context-switch from HTML to a CSS stylesheet.

Tailwind’s own documentation points out a common objection to this approach: “isn’t this just inline styles?” Those of you from the 1990s will remember having to add styling markup to your HTML files back in the day, before the CSS revolution took hold. But according to Tailwind, its “utility class” approach offers more functionality than inline styles — including the ability to do responsive design (mobile-friendly designs).
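A small side-by-side makes the trade-off concrete. The first button below uses an inline style; the second uses Tailwind utility classes, which can carry responsive variants that inline styles can't express (the class and property values are illustrative):

```html
<!-- Inline styles: no responsive variants, no shared design scale. -->
<button style="padding: 8px 16px; background: #3b82f6; color: white;">
  Save
</button>

<!-- Tailwind utilities: px-*/py-* map to a constrained spacing scale,
     and the md: prefix applies only at the medium breakpoint and up. -->
<button class="px-4 py-2 md:px-6 bg-blue-500 text-white">
  Save
</button>
```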

So ease of use — especially compared to coding and then maintaining a CSS file — and the ability to do your styling inside HTML are the primary reasons why many developers love Tailwind. In his post, Matt Rickard added “copy-and-pastable,” “fewer dependencies, smaller surface,” and “reusability” as key strengths of the framework.

As for its critics, the overall theme of their dislike for Tailwind is that it somehow “disrespects the platform it sits on,” as Jared White put it in a recent post. When I queried him about this, he pointed me to an earlier post of his that outlines his specific critiques. To quickly sum them up: he thinks Tailwind “promotes ugly-ass HTML,” he doesn’t like that “CSS files built for Tailwind are non-standard (aka proprietary) and fundamentally incompatible with all other CSS frameworks and tooling,” he believes that “Tailwind forgets that web components exist,” and, finally, he thinks it “encourages div/span-tag soup.”

In a nutshell, Tailwind has ugly markup and is non-standard — that seems to be the core complaint of Jared White and other critics of Tailwind. Jeff Sandberg mentioned similar complaints in his recent blog post arguing against Tailwind. Sandberg concluded with a larger point about the rise of Tailwind at the expense of writing CSS directly: “Tailwind is a symptom of what I feel to be a larger problem in development. There’s been a rapid deterioration in pride-of-craftsmanship in development.”

So Who’s Right…

Tailwind’s creator, Adam Wathan, has no doubt debated people many times on platforms like X/Twitter. I trawled through some of the recent threads, but a GIF he posted of Macho Man Randy Savage seems to sum up his stance.

It’s tempting to look at this debate about Tailwind as yet another “Cool Tool vs. Web Purists” argument (which typically means nobody will ever agree on anything the other side says).

On the one hand, I don’t blame any practicing web developer for wanting to use the easiest tool available and also one that plugs in nicely with other tools — for instance, Tailwind can be used with Next.js. This is the pragmatist approach to web development; and in some cases, developers may not even have a choice, if a project already uses Tailwind and they’ve just joined the team.

On the other hand, deviating from existing web standards (however subtly) can become a problem further down the road. If you’re no longer working directly with CSS files, and instead working with an abstraction like Tailwind, doesn’t that mean you’re less likely to understand the underlying technology?

I think Google’s Una Kravets summed it up nicely, during a recent X/Twitter debate about Tailwind. “Tailwind can be a great solution,” she tweeted in June. “The issues arise when folks think they don’t need to learn CSS if they learn Tailwind, which ultimately limits them.”

Comparing the Tailwind Debate to the React Stoush

The Tailwind debate is slightly different from the one we’ve been having over React for the past several years. There’s good evidence that React actually is harmful to the web, primarily because of the large load it puts on browsers — which can mean performance issues for many users.

The amount of unnecessary JavaScript in web pages due to React can even be seen as an ethical issue. Alex Russell from the Microsoft Edge team wrote at the end of last year that “sites continue to send more script than is reasonable for 80+% of the world’s users, widening the gap between the haves and the have-nots.”

In the case of Tailwind, though, there doesn’t appear to be any damage to the end user. What Tailwind’s critics are complaining about is partly the aesthetics (“ugly markup”) and partly what Tailwind is allegedly doing to the craft of web development (the non-standard approach).

Web developer Paul Scanlon had a snappy retort to the Tailwind critics when I asked him about this debate. “I’ve been writing CSS for nearly 20 years and it’s terrible and always hard to maintain, and so is yours,” he said. “Tailwind at the very least standardized what terrible looks like.”

I can attest to the difficulties in dealing with CSS files — I was recently studying the multiple CSS files of my Web 2.0 tech blog, ReadWriteWeb, and was amazed at how convoluted those files were. But that was 15 or so years ago, and CSS has improved since then. Or, at least, Jeff Sandberg thinks so. “I’ve seen other engineers, of all levels, stuck in a mire of bad CSS, and so to them maybe Tailwind seems like a lifesaver,” he wrote in his post. “But CSS is better now. It’s not perfect, but it’s better than it’s ever been, and it’s better than tailwind.”

Sandberg implores developers to give CSS “another try.” And perhaps they will, after they’ve finished their day’s paid work in the cool tools of Next.js and Tailwind.

The post Tailwind CSS Debate: Another Cool Tool Dissed by Web Purists appeared first on The New Stack.

]]>
LLM App Ecosystem: What’s New and How Cloud Native Is Adapting https://thenewstack.io/llm-app-ecosystem-whats-new-and-how-cloud-native-is-adapting/ Mon, 14 Aug 2023 18:33:16 +0000 https://thenewstack.io/?p=22715718

The developer ecosystem for AI-enabled applications is beginning to mature, after the emergence over the past year of tools like

The post LLM App Ecosystem: What’s New and How Cloud Native Is Adapting appeared first on The New Stack.

]]>

The developer ecosystem for AI-enabled applications is beginning to mature, after the emergence over the past year of tools like LangChain and LlamaIndex. There’s even now a term for AI-focused developers: AI engineer, which is the next step up from “prompt engineer,” according to its proselytizer Shawn @swyx Wang. He’s created a nifty diagram showing where AI engineers fit into the wider AI and development ecosystems:

Diagram via swyx.

A large language model (LLM) is the core technology for an AI engineer. It’s no coincidence that both LangChain and LlamaIndex are tools that extend and complement LLMs. But what other tools are available to this new class of developer?

The best diagram for an LLM stack I’ve seen so far is from the VC firm Andreessen Horowitz (a16z). Here’s its view of an “LLM app stack”:

Diagram via a16z.

The All-Important Data Layer

Needless to say, the most important thing in an LLM stack is the data. In a16z’s diagram, that’s the top layer. The “embedding model” is where the LLM comes in — you can choose from OpenAI, Cohere, Hugging Face, or one of a few dozen other LLM options, including the increasingly popular open source LLMs.

But even before you get to LLMs, a16z implies that you need to set up a “data pipeline” — it lists Databricks and Airflow as two examples, or you could just go “unstructured” with your data. Not mentioned by a16z, but I think it fits into this part of the data cycle, are tools that help enterprises “clean” or simply curate data before it is fed into a custom LLM. So-called “data intelligence” companies like Alation offer this type of service — it’s a cousin of the better-known “business intelligence” category of tools in the enterprise IT stack.

The final part of the data layer is a class of tools allowing you to store and process your LLM data — the vector database. According to Microsoft’s definition, this is “a type of database that stores data as high-dimensional vectors, which are mathematical representations of features or attributes.” The data is stored as a vector via a technique called “embedding.”

When I spoke to leading vector database vendor Pinecone back in May, the company pointed out that its tool is often used alongside data pipeline tools, like Databricks. In such cases, the data usually resides elsewhere (a data lake, for instance) and is then transformed into embeddings by running it through a machine-learning model. After processing and chunking the data, the resulting vectors are sent to Pinecone.
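In code, that flow looks roughly like the sketch below; both helper functions are hypothetical stand-ins, not Pinecone's or Databricks' actual SDKs:

```typescript
// Hypothetical stand-ins for an embedding model and a vector database client.
async function embed(text: string): Promise<number[]> {
  // In practice this would call a machine-learning model; dummy values here.
  return new Array(8).fill(text.length % 7);
}

async function upsertVector(id: string, values: number[]): Promise<void> {
  console.log(`stored ${id} (${values.length} dimensions)`);
}

// Chunk a document, embed each chunk, and store the resulting vectors.
async function indexDocument(id: string, text: string): Promise<void> {
  const chunks = text.match(/[\s\S]{1,1000}/g) ?? []; // naive 1,000-char chunks
  for (const [i, chunk] of chunks.entries()) {
    const vector = await embed(chunk);
    await upsertVector(`${id}-${i}`, vector);
  }
}
```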

Prompts and Queries

The next two layers can be summarized as prompts and queries — it’s where an AI application interfaces with an LLM and (optionally) other data tools.

A16z positions both LangChain and LlamaIndex as “orchestration frameworks,” meaning tools that developers can use once they know which LLM they are using.

According to a16z, orchestration frameworks like LangChain and LlamaIndex “abstract away many of the details of prompt chaining,” which means querying and managing data between an application and the LLM(s). Included in this orchestration process is interfacing with external APIs, retrieving contextual data from vector databases, and maintaining memory across multiple LLM calls.

The most intriguing box in a16z’s diagram is “playground”, which includes OpenAI, nat.dev and Humanloop. A16z doesn’t define what this is in the blog post, but we can deduce that “playground” tools help the developer do what a16z calls “prompting jiu-jitsu.” These are places where developers can try various prompting techniques.

Humanloop is a British company and one of the features of its platform is a “collaborative prompt workspace.” It further describes itself as a “complete developer toolkit for productionizing your LLM features.” So basically it allows you to try LLM stuff out and then deploy it to an application if it works. (I’ve reached out to the company to set up an interview, so I will be writing more about this separately.)

LLM Ops

To the right of the orchestration box are a host of operational boxes, including LLM cache and Validation. There are also a bunch of cloud and API services related to LLMs, including open API repositories like Hugging Face, and proprietary API providers like OpenAI.

This is perhaps where the stack most resembles the developer stack we’ve become accustomed to in the “cloud native” era, and it’s no coincidence that a number of DevOps companies have added AI to their list of offerings. In May I spoke to Harness CEO Jyoti Bansal. Harness runs a “software delivery platform” that focuses on the “CD” part of the CI/CD process [continuous integration and continuous delivery/continuous deployment].

Bansal told me that AI can alleviate the tedious and repetitive tasks involved in the software delivery lifecycle, from generating specifications based on existing features to writing code. He also said that AI can automate code reviews, vulnerability testing, bug fixing, and even the creation of CI/CD pipelines for builds and deployments.

AI is also changing developer productivity, according to another conversation I had in May. Trisha Gee from Gradle, the build automation tool, told me that AI can accelerate development by reducing the time spent on repetitive tasks — like writing boilerplate code — and enabling developers to focus on the bigger picture, such as ensuring the code meets business requirements.

Web3 Is Dead, Long Live the AI Stack

What we’ve seen so far in the emerging LLM developer stack is a bunch of new product types — such as the orchestration frameworks (LangChain and LlamaIndex), vector databases, and “playground” platforms like Humanloop. All of them extend and/or complement the underlying core technology of this era: large language models.

But we’ve also witnessed nearly all companies from the cloud native era adapting their tools to the AI engineer era. That augurs well for the future evolution of the LLM stack. The phrase “standing on the shoulders of giants” springs to mind: the best innovation in computer technology invariably builds on what came before. Perhaps that’s what undid the failed “Web3” revolution — which wasn’t so much building atop the previous generation, as trying to usurp it.

This new LLM app stack is different; it’s a bridge from the cloud development era to a newer, AI-based developer ecosystem.

The post LLM App Ecosystem: What’s New and How Cloud Native Is Adapting appeared first on The New Stack.

]]>
Dev News: Svelte 5 vs. VanillaJS and Google’s Project IDX https://thenewstack.io/dev-news-svelte-5-vs-vanillajs-and-googles-project-idx/ Sat, 12 Aug 2023 15:00:10 +0000 https://thenewstack.io/?p=22715605

Rich Harris, Svelte’s creator, says Svelte 5 is going to be radical, and as proof he offered a chart showing

The post Dev News: Svelte 5 vs. VanillaJS and Google’s Project IDX appeared first on The New Stack.

]]>

Rich Harris, Svelte’s creator, says Svelte 5 is going to be radical, and as proof he offered a chart showing how many lines of code it creates for various functions versus Svelte 4 and, more significantly, vanilla JavaScript. For most of the functions he lists, it runs neck-and-neck with vanilla JavaScript.

We’re thinking this shift is made possible by the promised code modernization he told The New Stack about earlier this year.

The Svelte team is switching the underlying code from TypeScript to JavaScript — no, you read that right, that’s actually what he said. We double-checked after several developers told us that couldn’t be right.

“People are getting confused because they assume that JavaScript means ‘no types,’ but what we’re actually doing is converting our .ts files to .js files with type annotations in JSDoc comments rather than TypeScript syntax,” Harris told The New Stack. “This gives us equivalent type safety but eliminates the friction normally associated with things like TypeScript.”
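In practice the difference looks like this: the same type information, expressed as JSDoc comments in a plain .js file that the TypeScript compiler can still check:

```javascript
// TypeScript syntax (a .ts file):
//   export function greet(name: string): string { return `Hi, ${name}`; }

// The JSDoc equivalent in a .js file, checked the same way by tsc:

/**
 * @param {string} name
 * @returns {string}
 */
export function greet(name) {
  return `Hi, ${name}`;
}
```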

Google’s Project IDX to Offer Multiplatform App Development

Google announced a new browser-based development “experience” that’s built on Google Cloud and leverages Codey, a foundational AI model trained on code and built on PaLM 2. It also supports Next.js deployment via a Firebase Hosting integration.

Google’s blog post did not specify whose code Codey was trained on, or where that code comes from.

Project IDX leverages the AI Codey, which as of now supports C++, Go, Java, JavaScript, Kotlin, Python, Ruby and TypeScript, among others.

It’s also built on Visual Studio Code using Code OSS, which should give you some idea of where they’re headed with this. It has pre-baked templates for Angular, Flutter, Next.js, React, Svelte, Vue and languages such as JavaScript, Dart and soon Python and Go, the post stated.

“At the heart of Project IDX is our conviction that you should be able to develop from anywhere, on any device, with the full fidelity of local development,” the blog post stated. “Every Project IDX workspace has the full capabilities of a Linux-based VM, paired with the universal access that comes with being hosted in the cloud, in a data center near you.”

Project IDX includes a built-in web preview and there are plans to add an Android emulator and an embedded iOS simulator.

It integrates Firebase Hosting, which allows developers to deploy a shareable preview of a web app or to deploy to production.

“And because Firebase Hosting supports dynamic backends, powered by Cloud Functions, this works great for full-stack frameworks like Next.js,” the blog post added. That functionality put it neatly in place to compete with Vercel and Netlify, which also support Next.js deployments.

Vercel Launches Next.js Commerce 2.0

Speaking of Vercel and Next.js, the frontend development company released Next.js Commerce 2.0 Monday and it’s all about speed.

In a blog post, the company points out e-commerce sites took a hit with Google when page experience became a ranking factor in search results. Amazon, for instance, found that just 100 milliseconds of extra load time cost the e-tailer 1% in sales. There are a lot of reasons, including personalization, images and videos, for why this is a hard problem for e-commerce sites to solve, the blog post explained.

The updated solution leverages Next.js 13 and introduces an app router, it noted, to create storefronts that feel static but are completely dynamic.

It includes a dynamic storefront and a simplified architecture (a single provider per repository), which results in less code, the post notes. There’s also a new e-commerce accelerator template, which showcases recommended patterns for building composable commerce applications, including support for BigCommerce, Medusa, Saleor, Shopify and Swell.

TypeScript 5.2 Release Candidate Now Available

Microsoft announced the TypeScript 5.2 RC on Tuesday. So what’s new? Maintainer Daniel Rosenwasser walked through the updates, which include:

  • Using declarations and explicit resource management, which is designed to cut down on the “noise” created in code when cleaning up after creating an object (see the sketch after this list);
  • Decorator metadata. This is an upcoming ECMAScript feature that makes it easy for decorators to create and consume metadata on any class they’re used on or within;
  • Named and anonymous tuple elements. Tuples are used to store multiple items in a single variable. TypeScript previously had a rule that tuples could not mix and match between labeled and unlabeled elements. With this update, TypeScript can preserve labels when spreading into an unlabeled tuple.
  • Easier method usage for unions of arrays. “In previous versions on TypeScript, calling a method on a union of arrays could end in pain,” Rosenwasser wrote. “In TypeScript 5.2, before giving up in these cases, unions of arrays are treated as a special case. A new array type is constructed out of each member’s element type, and then the method is invoked on that.” The long and short of it is that methods like filter, find, some, every and reduce should all be invokable on unions of arrays in cases where they were not previously.
  • Type-only import paths with TypeScript implementation file extensions. This means developers can now write import-type statements that use .ts, .mts, .cts, and .tsx file extensions. It also means that import() types, which can be used in both TypeScript and JavaScript with JSDoc, can use those file extensions.
  • Comma completions for object members. “TypeScript 5.2 now gracefully provides object member completions when you’re missing a comma,” Rossenwasser wrote. “But to just skip past hitting you with a syntax error, it will also auto-insert the missing comma.”
  • Inline variable refactoring. Using the “inline variable” refactoring will eliminate the variable and replace all the variable’s usages with its initializer, he explained, adding that this may cause that initializer’s side effects to run at a different time, and as many times as the variable has been used.”
  • Optimized checks for ongoing type compatibility; and
  • Breaking changes and correctness fixes.
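
To make a few of those concrete, here is a minimal sketch of using declarations, mixed tuple labels and array-union methods. It assumes a TypeScript 5.2 compiler with Symbol.dispose available (via the esnext lib or a polyfill); TempFile is a hypothetical resource type invented for illustration.

```typescript
// A hypothetical disposable resource; [Symbol.dispose] is its cleanup hook.
class TempFile implements Disposable {
  constructor(private path: string) {}
  [Symbol.dispose]() {
    console.log(`cleaned up ${this.path}`);
  }
}

function work() {
  // `using` guarantees disposal when the scope exits, even on an exception,
  // replacing the try/finally "noise" the release notes describe.
  using file = new TempFile("/tmp/scratch");
  // ... work with file ...
} // file[Symbol.dispose]() runs here

work();

// Tuples can now mix labeled and unlabeled elements.
type Labeled = [first: string, second: number];
type Mixed = [boolean, ...Labeled]; // previously an error
const m: Mixed = [true, "a", 1];

// Methods on unions of arrays: filter() works where it used to error.
const values: string[] | number[] = Math.random() > 0.5 ? ["a", ""] : [1, 0];
const truthy = values.filter(Boolean); // result type: (string | number)[]
```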

Is Jamstack Toast? Some Developers Say Yes, Netlify Says No https://thenewstack.io/is-jamstack-toast-some-developers-say-yes-netlify-says-no/ Wed, 09 Aug 2023 15:49:14 +0000 https://thenewstack.io/?p=22715327

When Netlify acquired one of its former competitors, Gatsby, in February, I noted that its use of the term “Jamstack” (which it coined in 2016) wasn’t so prominent in its marketing anymore. “Composable architectures” appeared to be the new catchphrase. Fast-forward six more months, and Netlify has just closed The Jamstack Community Discord, according to Jamstack aficionado Brian Rinaldi (who runs an email newsletter called — for now — JAMstacked).

Rinaldi added that Netlify has “largely abandoned” the term, “in favor of a ‘composable web’ term that better aligns with their ambitions around becoming a broader enterprise platform including content (with tools like Netlify Connect).” Another developer who has heavily used Jamstack over the past 7-8 years, Jared White, considers the name “all but dead” now.

So is Jamstack dead or not? To find out from the horse’s mouth, I messaged Netlify CEO Matt Biilmann.

“Very much not dropping the term or declaring the architecture gone!” he wrote back, adding that “the Jamstack architecture has won out to a degree where there’s little distinguishing ‘Modern Web Architecture’ from Jamstack architecture.”

In a tweet, he clarified that “basically all modern web frameworks ended up being built around self standing front-ends talking to API’s and services.”

Paul Scanlon, a developer who works for CockroachDB (and is also a tutorial writer for The New Stack), agrees with Biilmann.

“Jamstack, in terms of the word or definition, might be ‘dead’, but the principle lives on,” he told me. “Web development prior to Jamstack very much existed with front end and backend being separate things, with developers working on either side of the stack. Jamstack not only merged the technologies to form a collapsed stack, but it meant developers naturally became full stack.”

Whether or not the term “Jamstack” is still relevant, Biilmann admits that the company is re-focusing its marketing efforts.

“So the architecture is more alive than ever and has won out to the degree that for us as a company, we are now more focused on marketing around how to help large enterprises at scale modernizing their web infrastructure, rather than convincing individual web teams to adopt a Jamstack approach,” he said.

The Rise and Plateau of Jamstack

Regardless of whether Jamstack “won,” it’s clear its popularity has plateaued. But why? To answer that, we first have to go back a few years.

I first wrote about Jamstack in July 2020, soon after I joined The New Stack. I interviewed Biilmann about a trend that was at the time styled “JAMstack” — the “JAM” referred to JavaScript, APIs and Markup; the “stack” part referred to cloud computing technologies.

I quickly learned that the acronym itself wasn’t particularly meaningful. It’s not so much the components of JAMstack that make it interesting, I wrote in 2020: “It’s that the approach decouples the frontend of web development from its backend.”

The early promise of JAMstack for developers was that it would make their lives easier, by allowing them to create simple HTML files using a “static-site generator” (like Gatsby or Hugo), call APIs using client-side JavaScript, and deploy using git (typically to CDNs — content delivery networks).
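
In code, that original promise was strikingly small. Here is a minimal sketch of the pattern: a prebuilt static page enhanced with a client-side API call. The comments endpoint is hypothetical.

```typescript
// A page prebuilt by a static-site generator, enhanced in the browser with
// a client-side API call. The comments endpoint is hypothetical.
async function loadComments(postId: string): Promise<void> {
  const res = await fetch(`https://api.example.com/comments?post=${postId}`);
  if (!res.ok) return; // degrade gracefully; the static page still works

  const comments: { author: string; body: string }[] = await res.json();
  const list = document.querySelector("#comments");
  if (!list) return;

  for (const c of comments) {
    const li = document.createElement("li");
    li.textContent = `${c.author}: ${c.body}`;
    list.appendChild(li);
  }
}

loadComments("hello-world");
```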

Netlify didn’t do all of this itself (especially the static file part), which is why it wanted to create an ecosystem called JAMstack. But it had a significant footprint in that ecosystem, by enabling developers to access APIs and deploy those static files. As Biilmann himself told me in 2020, “We [Netlify] take all of the complexity of building the deployment pipelines, of running the infrastructure, of managing serverless functions, of all of that, [and] we simply abstract that away from you.”

However, as the years rolled by, the Jamstack ecosystem seemed to increase in complexity — largely due to the ever-increasing popularity of React and its attendant frameworks. As Jared White explained in his post, “JAMstack eventually gave rise to a rebranded ‘Jamstack’ with the major value prop being something rather entirely different: you could now build entire websites out of JavaScript libraries (aka React, or maybe Vue or Angular or Svelte) and JavaScript frameworks (aka Next.js, Gatsby, Nuxt, SvelteKit, etc.).”

So Is Jamstack Dead or Alive?

It’s fair to say that the term “Jamstack” (as it’s now styled) has become rather muddled. As Brian Rinaldi pointed out in his post, “the definition has continued to shift to accommodate new tools, new services and new platform features.” At the beginning of this year, Rinaldi wrote that “Jamstack has become more of a ‘community’ than a set of architectural rules.”

Certainly, Netlify itself isn’t pushing the term as much as it used to. Jamstack now only barely features on Netlify’s homepage, way down at the bottom in the form of two legacy menu items (“Jamstack Book” and “Jamstack Fund”). The word “composable,” by contrast, features twice at the very top of the page — including in its new catchphrase, “The future is composable.”

“Composable is a broader term that becomes more relevant when we’re talking to architects at large companies that are not just thinking about the web layer, but how to organize the underlying architecture as well,” Biilmann said when I asked him about the new term.

That’s fair enough, but what do practicing web developers think of Jamstack now? Jared White, for one, is ready to move on. “What Netlify gave us originally was a vision of how to deploy HTML-first websites easily via git commits and pushes, just like Heroku had done for dynamic applications,” he concluded. “All we need now is a modern Netlify/Heroku mashup that’s cheap, stable, and doesn’t need to reinvent the damn wheel every year.”

Paul Scanlon thinks the guiding principles of Jamstack are still relevant, although he sees little use for the term itself. “Does it even matter? I’m a Flash Developer, Flash died a long, long time ago and I’m still here. The guiding principles behind anything that move us forward will always remain. The buzzwords likely won’t.”

For his part, Rinaldi says that “the term seems to be dead but the tools and technologies it encompassed are still very much alive.” He plans to re-brand his JAMstacked newsletter but hasn’t yet decided on a replacement name.

What Does It Mean for Web Browsers to Have a Baseline https://thenewstack.io/what-does-it-mean-for-web-browsers-to-have-a-baseline/ Tue, 08 Aug 2023 12:00:48 +0000 https://thenewstack.io/?p=22715122

For users, the promise of the web is simplicity — you don’t have to install anything, just type in a URL or search. But for developers, it’s about reach and portability — and that requires strong compatibility and interoperability between browsers.

But even with Internet Explorer gone, the top frustrations that show up in survey after survey of web developers are all about the lack of compatibility: avoiding features that don’t work in multiple browsers, making designs look and work the same way in different browsers, and testing across multiple browsers. “Making things work between browsers is their biggest pain point,” Kadir Topal, who works on the web platform at Google, told The New Stack.

For the last few years, the Interop project (and the Compat 2021 effort that preceded it) has helped to reduce and eventually remove a number of these compatibility pain points. But even if they know about the focus areas targeted for improvement through Interop, web developers aren’t likely to keep checking the Web Platform Tests dashboards when deciding what features to use on a site, let alone follow specifications through their draft and approval stages and their sometimes slow progress through standards development, even with the W3C’s fairly comprehensive list of browser specifications.

Despite the name, Chrome Platform Status covers more than one browser, but entries aren’t usually updated after a feature ships in Chrome, so you can’t rely on the compatibility details to stay current. Apple no longer publishes a WebKit status page, although you can look up its position on various proposed web standards; Mozilla keeps a similar list of its own positions on specifications, but both are mainly a glimpse of the future. Developers can check the CSS database and bug trackers for Chromium and Firefox, look up polyfills at Polyfill.io, or check feature status on MDN and caniuse.com.

Or they might just stick with what they know works already. If we don’t want to lose out on the promise of the web platform with evergreen browsers and living standards, how do we make it easier for web developers to know which web platform features are ready for mainstream use?

Setting a Baseline

Because different browsers are developed and updated on their own schedule, there’s no one moment when everything in a standard becomes available universally. Safari 16.4 was a major release with a long list of new features — some of which have been supported in other browsers for five or more years.

Release notes might attract some attention, but if developers hear about an interesting new feature in a conference talk or a blog and look it up only to find it works in only one browser, the excitement about it can easily dissipate. Developers want to know what works everywhere, and even when features are in multiple browsers, “they’re often available with bugs and inconsistencies, and therefore developers often deem them impossible to use in production,” warned Topal, who worked on MDN for many years.

What it adds up to is that while the caniuse site is invaluable, “developers are unclear on what is actually available on the platform”.

Baseline is a project from the WebDX Community Group that attempts to remove that confusion, “making it really clear to developers what they can and cannot use on the web platform” by listing the set of features that are available in all the major browsers and (in future) making it easier to track new features that are under development.

Rather than adding features as they get released, which could turn into just one more thing to try and keep track of, the list will be compiled once a year. “We’re hoping that once a year we can do this cut of the platform and say, ‘this is the web platform baseline 2023, 2024 or 2025’. Then each year we can talk about what’s new: what are the things that you can use now that you couldn’t use before, not just that they’ve landed in a browser, but are actually available to you because they are already widely available.”

The criteria for a feature to be included in an annual baseline are actually stricter than for most web standards, which require only two implementations: baseline features have to be supported in the current and previous versions of Chrome, Edge, Firefox, and Safari. “The idea of Baseline is to provide a clear signal for when something is broadly available, and shouldn’t end up causing any problems, rather than just leaving it up to developers to work it out,” explained Mozilla engineer James Graham.
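
Expressed as code, that criterion amounts to a simple predicate. Here is a sketch under the stated rule; the data shapes are hypothetical, for illustration only.

```typescript
// A sketch of the Baseline criterion described above: supported in the
// current and previous major version of Chrome, Edge, Firefox and Safari.
// The data shapes are hypothetical, for illustration only.
type Browser = "chrome" | "edge" | "firefox" | "safari";

function isBaseline(
  firstSupported: Record<Browser, number | null>, // version the feature landed in
  current: Record<Browser, number> // current major version per browser
): boolean {
  return (Object.keys(current) as Browser[]).every((b) => {
    const since = firstSupported[b];
    // Supported in the current AND previous version means support must
    // have landed no later than the previous major release.
    return since !== null && since <= current[b] - 1;
  });
}

// A feature that shipped everywhere at least one release ago qualifies:
console.log(
  isBaseline(
    { chrome: 110, edge: 110, firefox: 110, safari: 16 },
    { chrome: 115, edge: 115, firefox: 116, safari: 17 }
  )
); // true
```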

Baseline isn’t a return to waterfall engineering by deciding in advance what features will be in the next year’s web platform, nor an attempt to force all the browsers to coordinate the features they ship, noted Rick Byers, a director on Google’s web platform team. It just records what features are actually broadly available in browsers, in a way that’s easy to spot in documentation or highlight in a blog. “It’s breaking the assumption that the pace of developer understanding has to match the pace of standards development.”

Communicating to busy developers has been the missing piece of standards development. “As browser vendors, we’ve been focusing a lot on the things that we ship in our own browsers, but for developers what really matters […] is what is it that they can use now,” Topal said. “Once features are available across browsers, and once they are interoperable, we still need to go out there and make sure that developers are aware of them. Because for the longest time we basically trained developers to look at features that land in browsers as things that they might be able to use in a decade from now.”

Web standards are changing quickly and there’s still plenty of experimentation pushing the platform forward, but it is also getting less fragmented, he maintained. “Now that we have more collaboration between browsers and things are shipping faster across browsers and in a more interoperable way, we also need to change the mindset of developers that the web platform is actually moving forward.”

“Baseline is one way for us to get that across in a way that’s not chaotic.”

Google will be using the Baseline logo in articles and tutorials on web.dev, but perhaps more importantly it will also be on MDN and — hopefully by the end of 2023 — on caniuse. There will also be widgets that make it easy to include the Baseline status of a feature in a blog or other documentation.

One of the first MDN pages to highlight the Baseline status of a feature.

“We’re excited to be displaying Baseline support on relevant MDN pages. Through our research, we found web developers lack a quick and reliable way to see the status of features. And while our browser compatibility tables are useful and accurate, they are detailed and more suited to a developer’s deeper support research,” Graham noted. “It’s still early days, but we’re looking to roll it out further over the next few months. This will allow us to gain feedback from our users to make sure it’s a useful and relevant feature for them.”

So far, the Baseline information is only on a few MDN pages, and not even on all of the pages documenting some recent features Google calls out as qualifying for Baseline status. Partly that’s because it takes time for MDN (and MDN contributors like the Open Web Docs project) to add the information, and for the caniuse team to integrate it, but he also added, “Discussions about exactly how to decide when a feature meets the bar of being broadly available are ongoing.”

“The point of Baseline is to make it clear when features are safe to use without worrying about running into bugs and compatibility issues,” he explained.

Baseline or Lowest Common Denominator?

There’s always a tension between making information clear enough to grasp quickly and detailed enough to be useful.

The caniuse site doesn’t give developers the yes or no answer they might be looking for. But the browser landscape is equally complex and not everyone updates to the latest browsers as soon as they ship — or uses the four main browsers that will be covered by the annual Baseline feature list. A commercial website or web application may be able to dictate what browsers customers can use with it. But a government department or a service provider building a website will need to support a very wide range of users and devices, and may need to use polyfills and progressive enhancement to cover all the browsers they need features to work with.
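
Progressive enhancement, in practice, means feature-detecting a capability before using it and falling back when it’s missing. A small sketch of that approach, using lazily loaded images as the example:

```typescript
// Progressive enhancement: detect the capability, fall back when it's absent.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

if ("IntersectionObserver" in window) {
  // Modern browsers: load each image only when it scrolls into view.
  const io = new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src ?? "";
      observer.unobserve(img);
    }
  });
  lazyImages.forEach((img) => io.observe(img));
} else {
  // Older browsers simply load every image up front; the page still works.
  lazyImages.forEach((img) => {
    img.src = img.dataset.src ?? "";
  });
}
```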

“Developers have situated needs regarding interoperability, which is why tools like caniuse are so helpful,” Alex Russell, Partner Product Manager on Microsoft Edge, cautioned. Sometimes you need the extra detail. “Caniuse allows developers to import their logs to weight feature availability to represent their user’s needs with higher fidelity than a general-purpose lowest-common-denominator view can provide.”

You can pair the compatibility matrix on caniuse with usage statistics for a detailed view of where a particular feature will and won’t work — IWantToUse has a friendly interface for doing that for multiple features — but even so developers won’t always find the information they need, Graham pointed out.

“In some cases, the specific APIs you’re interested in don’t directly map to something in caniuse, so you need to look at the MDN browser compatibility tables and work out for yourself whether the users you’re hoping to support are likely to have access to the feature.”

That compatibility data is on individual MDN pages, so developers have to check one API at a time — or run a query against the data in the browser-compat-data repo and the W3C’s browser implementation status repo, which adds in Chrome Platform Status and caniuse data but still isn’t a comprehensive list of all web features.
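
That kind of query is easy to script. Here is a minimal sketch against the @mdn/browser-compat-data package (assuming a recent version is installed in a Node project), using the FileReader API as the example:

```typescript
// A minimal sketch of querying MDN's browser-compat-data (BCD) from Node.
// Assumes a recent @mdn/browser-compat-data package is installed; it prints
// the version in which each major browser first shipped the FileReader API.
import bcd from "@mdn/browser-compat-data";

const browsers = ["chrome", "edge", "firefox", "safari"] as const;
const support = bcd.api.FileReader.__compat?.support;

for (const browser of browsers) {
  // A support entry may be one statement or an array of them; take the first.
  const entry = support?.[browser];
  const statement = Array.isArray(entry) ? entry[0] : entry;
  console.log(`${browser}: ${statement?.version_added ?? "unknown"}`);
}
```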

These different resources don’t always match up completely. BCD covers some 14,000 features — down to API interfaces and CSS properties — while caniuse has a higher-level list of around 520 features. The 2,200-odd entries in Chrome Platform Status are a mix of both, but compiled from the viewpoint of people building a browser rather than a website, so there might be separate listings for different interfaces in an API like FileReader.

“All sites are different since they have different needs and audiences, so it’s hard to pick a line that works for everyone all of the time,” he noted. Baseline may have less detail but it will also be much simpler for developers to keep track of.

“The aim is that we get to a place where developers trust that if something’s in Baseline, they feel confident to go ahead and use it for any kind of website that doesn’t have really unusual compatibility requirements. And by putting it directly on MDN, we hope that developers are able to learn when features have reached that threshold of usefulness much faster than they do with the current processes.”

Priorities and Politics

One of the biggest advantages of the Baseline project may be the opportunity to make more web developers familiar with the cycle that moves the web platform forward — features emerging in origin trials in browsers; being tested, stabilized, standardized and made interoperable through projects like Interop, with features that score well enough on interoperability graduating into each year’s baseline.

Subgrid is a good example of that pipeline. Currently, it’s not something most developers can use. “Features like subgrid that haven’t shipped everywhere — subgrid isn’t in stable Chromium even though it’s been in Gecko and WebKit for a while — are really hard to use on mainstream sites without causing problems for users,” Graham cautioned. But it has been a focus area in Interop 2022 and again in 2023, to make sure it ships as an interoperable feature. “The hope is that once features ship, they’re already in a usable state and so developers are able to use them on production sites much sooner than they could in the past. This in turn should mean things reach Baseline much sooner than they might have historically.”

Indeed, subgrid is now starting to ship in browsers, Topal said. “Next year it’s going to be in Baseline: it’s going to be widely available and we’re going to talk about it again because that’s when most developers, most of the time, will be able to use it.”

Knowing the cycle works could encourage more developers to bring up their interoperability and compatibility issues in the open bug tracker that feeds each year’s Interop priorities.

But it’s also important that a browser baseline doesn’t limit developers to only consider features that all the browsers agree on, in a way that holds the web back if some of the browser makers fall behind on features that don’t make it into the Interop focus areas. Baseline can’t be a “good enough” bar that allows browser makers to skate on delivering further progress.

For all the community positivity around Interop and the advantage of having the most influential browsers involved and making commitments to fully develop and support features, the price of that involvement is that they also have a veto. And while the bug tracker and the web platform test results are public, the governance of the process for reaching consensus and committing to the focus areas each year isn’t as open.

That underlines Interop’s complicated balancing act: getting browser vendors, who nearly all also have other platform interests beyond the web, to commit to moving the web platform together compatibly is an enormous achievement, but the process has to accommodate the various commercial pressures they all face to keep them involved.

As well as driving improvements in web platform test scores across all the main browsers, Interop (and the web platform test suite that underlies the project) has clearly helped draw more attention to the importance of compatibility and interoperability between browsers. Last year, the HTTP Archive’s Web Almanac included a section on interoperability for the first time, and Baseline is a continuation of this new focus.

But arguably, the reason we’re now seeing much faster progress in browsers like Safari (where Apple has hired a much larger team in recent years and is updating the browser far more frequently) is due not just to Interop providing a way for browser makers to jointly set priorities for improving compatibility, but also to the impact of regulators (like the UK’s CMA and the Japanese equivalent) investigating competition in mobile ecosystems and what part browsers play in that.

In the end, the continuing success of Interop likely depends on correspondingly continuing pressure from web users, developers and regulators demanding a web platform that is powerful and compatible. Broader participation in Interop, perhaps driven by developer awareness as part of the Baseline project, could help. “The thing that I would like to happen next time for Interop 2024 is for more people to know about the process,” Daniel Ehrenberg, vice president of Ecma (parent organization of the TC39 committee that standardizes JavaScript) told The New Stack.

Alongside Baseline, the WebDX group is also involved in research like the State of CSS and State of JS surveys, along with short surveys on MDN: “They’re really quick to fill out, and limited in scope, so that we can get input from people who don’t necessarily have the time to spend on a longer form feedback process,” Graham explained.

All that will feed forward into Interop 2024 by identifying the things on the web platform that need acceleration, Topal suggested.

“Instead of ad hoc asking about things that we could do in Interop, what we want to get to is a shared understanding of developer pain points between browser vendors. Even though we all already have individual product strategies, we’re still addressing the same audience. It’s the same web developers. We want to get on the same page since we own this platform together, we maintain this platform together. We want to make sure that we together have a shared understanding of the developer pain points.”

New Ways of Creating Standards?

What’s also interesting about Baseline is that like async context and Open UI, it’s emerging from a W3C community group rather than a standards body.

Since the days of HTML5, the WHATWG, W3C and (to a lesser extent) ECMAScript approach of “paving the cowpaths” (codifying the most common patterns found on websites into the standards browsers implement) has meant that standards often reflect patterns adopted because browsers already support them.

Open UI and WinterCG incubate draft proposals that are brought to those standards bodies for consideration, aligning more with the Origin Trials that Chrome and Edge use for features they want to bring to the web, which solicit developer feedback and produce tests and specifications.

Separating design from standardization like this can have the advantage of working faster — and failing faster — than a formal standard process, with a tighter feedback loop with the developers who are interested in a new feature. Iteration and experimentation in a community group can preserve momentum even when ideas don’t work out the first time. It also avoids everyone getting stuck with the first implementation of a feature when that turns out to have design flaws that can’t be changed because developers have already taken dependencies on them.

The WebDX Community Group includes not just Interop participants Apple, Google, Microsoft and Mozilla, along with Igalia and Bocoup, but also organizations like LG that aren’t as well known for making a browser. “It’s a new era of collaboration on the web platform,” Byers suggested.

Having Baseline emerge from a community focused on developer experience should help it become something that’s useful for developers, rather than something that lets web browser makers pat themselves on the back for how well they’re doing, and likely means we’ll see iterations in the way the annual Baseline is decided on and what it includes over time. If it takes off, it could add another level to the way the web platform creates and adopts the standards that make it powerful.

Stack Overflow Adds AI: Will the Community Respond? https://thenewstack.io/stack-overflow-adds-ai-will-the-community-respond/ Mon, 07 Aug 2023 14:24:36 +0000 https://thenewstack.io/?p=22714926

Stack Overflow has a bit of a love/hate relationship with generative AI. It made tech headlines in December for a short-term ban on ChatGPT answers after finding many were outright incorrect. Then its own developer survey revealed that 44% of developers already use AI tools for development and another 26% plan to do so soon. In June, moderators on the site launched a strike, in part over the company’s policy on AI-generated posts.

In July, some claimed that AI had contributed to a decline in the community site’s use. It’s even become the topic of a research paper, “Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow.”

The latest turn of events in the AI/Stack Overflow saga: The company last week announced it will roll out its own AI solution called OverflowAI. Oh, and plot twist: Stack Overflow will use AI to solve one of the Stack Overflow pain points, which is that when beginner developers ask entry-level questions, they are often harangued by their peers.

As computer scientist Santiago Valdarrama remarked in a tweet, “I don’t remember the last time I visited Stack Overflow. Why would I when tools like Copilot and ChatGPT answer my questions faster without making me feel bad for asking?”

It’s a problem Stack Overflow CEO Prashanth Chandrasekar acknowledges because, well, he encountered it too.

“When I first started using Stack Overflow, I remember my first experience was quite harsh, because I basically asked a fairly simple question, but the standard on the website is pretty high,” Chandrasekar told The New Stack. “When ChatGPT came out, it was a lot easier for people to go and ask ChatGPT without anybody watching.”

Semantic Search and Teams Integrations

OverflowAI adds semantic search, built on top of a vector database. OverflowAI then serves up the information from the site’s knowledge base without querying the community directly. Stack Overflow is also adding semantic search to its enterprise Stack Overflow for Teams solution, although that’s in a private beta.
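
For readers unfamiliar with the approach, here is a toy sketch of the general semantic-search pattern: embed texts as vectors, then rank documents by similarity to the query vector. It is illustrative only, not Stack Overflow’s implementation; the bag-of-words embed() is a crude stand-in for a real embedding model and vector database.

```typescript
// Toy semantic search: embed texts as vectors, rank by cosine similarity.
// Illustrative of the general pattern only; not OverflowAI's internals.
type Doc = { id: string; text: string };

// Crude stand-in for an embedding model: hash words into a fixed-size
// frequency vector. A real system would call a learned model and store
// the vectors in a vector database.
function embed(text: string, dims = 128): number[] {
  const v = new Array(dims).fill(0);
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (!word) continue;
    let h = 0;
    for (const ch of word) h = (h * 31 + ch.charCodeAt(0)) % dims;
    v[h] += 1;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank documents by similarity to the query and return the top k.
function search(query: string, docs: Doc[], k = 3): Doc[] {
  const q = embed(query);
  return docs
    .map((d) => ({ d, score: cosine(q, embed(d.text)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.d);
}
```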

“We really wanted to make sure that we were grounding people in one, trust — trusted answers … and citations of those answers, where they’re coming from,” Chandrasekar said.

In fact, most of the functionality of OverflowAI is aimed at the enterprise tool, such as the ability to ingest enterprise knowledge from existing enterprise content, creating a tagged knowledge base based on a company’s own information. Users will be able to vote, edit and comment on content.

“In essence, the AI efficiently bootstraps your Stack Overflow community, allowing you to take advantage of key documents in repositories that are not being discovered and reused,” Chandrasekar wrote in the announcement post. That could mean documents, wikis, GitHub README files, he told us.

The Teams tool is also being integrated with Stack Overflow’s new StackPlusOne chatbot.

Additionally, for Teams, there’s a Slack integration so it can answer questions from the organization’s knowledge base or from Stack Overflow’s community, and do so directly within the organization’s Slack.

But what may be of more interest to developers is that Stack Overflow is now offering an IDE (integrated development environment) extension for Visual Studio Code that will be powered by OverflowAI. This means that coders will be able to ask a conversational interface a question and find solutions from within the IDE.

Stack Overflow also is launching a GenAI Stack Exchange, where the community can post and share knowledge on prompt engineering, getting the most out of AI and similar topics.

Finally, Stack Overflow’s Natural Language Processing (NLP) Collective will now include a new feature called Discussions. This will provide a focused space to debate technical approaches, explore implementation strategies and share different perspectives, Chandrasekar wrote.

Users can sign up to be part of the OverflowAI preview.

As for code generation, Chandrasekar said that’s not a goal for its AI.

“Code generation, to a degree, is commoditized,” he said. “We wanted to do what we really do best, which is really zone in on the best knowledge base on how to do things correctly, and to surface all that, 58 million questions and answers, as a pair programmer, or an assistant that shows you all this in context.”

So far, response has been positive, Chandrasekar said. But will it work to resolve some of the challenges created by AI, such as the moderator strike? The New Stack tried to obtain a response from the striking moderators about whether the OverflowAI announcement changed anything — a request was posted for The New Stack in their Discord. However, as of press time, no representative had responded.
