WebAssembly Overview, News & Trends | The New Stack (https://thenewstack.io/webassembly/)

What Is WebAssembly?
https://thenewstack.io/what-is-webassembly-wasm/ (Mon, 25 Sep 2023)

The post What Is WebAssembly? appeared first on The New Stack.

WebAssembly, or Wasm, is a cornerstone of many modern web-based applications. But what exactly is WebAssembly? Initially conceived to bring near-native performance to web applications, Wasm provides a compact binary format that serves as a compilation target for languages like C, C++, and Rust.

Whether you’re enhancing web-based games, embedding real-time video editing into your application, or pushing the boundaries of 3D rendering, Wasm stands out as a driving force behind these advancements. To help you, this article will go over what every beginner should know about WebAssembly, what it’s generally used for, and what the future may hold for this promising technology.

The Core of WebAssembly

When you hear WebAssembly, try to think of it as a bridge of sorts — a bridge between the potential of high-performance applications and the open platform of the modern web.

Unlike traditional JavaScript, which is interpreted or JIT-compiled, WebAssembly code is delivered in a low-level binary format, making it both compact in size and swift in execution. This binary format is specifically designed for modern CPUs, ensuring that code runs at near-native speed.

Why Binary?

Wasm’s binary format serves a dual purpose — first off, it means that applications can be much smaller in size, leading to quicker downloads and reduced load times. This increased efficiency is particularly crucial for web apps, where every millisecond of loading time can make a difference to user experience and retention.

Also, because it’s much closer to machine code, the computational tasks are executed faster, bringing applications on the web closer in performance to their native counterparts.
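As a concrete illustration (not from the original article), every `.wasm` file begins with the same eight-byte header, and the standard JavaScript API can check any byte sequence for well-formedness. A minimal sketch, runnable in Node.js or a browser console:

```javascript
// The smallest valid WebAssembly binary: the magic bytes "\0asm" plus version 1.
// Every .wasm file shipped to a browser starts with exactly this header.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic number
  0x01, 0x00, 0x00, 0x00, // binary format version 1 (little-endian)
]);

// WebAssembly.validate checks well-formedness without compiling or running anything.
console.log(WebAssembly.validate(emptyModule));               // true
console.log(WebAssembly.validate(new Uint8Array([1, 2, 3]))); // false: bad magic
```

The compactness argument follows from this format: the engine decodes a dense, pre-verified byte stream instead of parsing and optimizing JavaScript source text.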

High-Level Languages as Catalysts

With Wasm, devs are no longer restricted to JavaScript alone; they can write complex code in languages like C, C++, Rust, and more, which is then compiled into Wasm’s binary format, allowing the application to run in the web browser.

For devs of all experience levels, this is absolutely monumental. It means they can now bring performance-heavy applications — like video editing tools or 3D games — directly to the browser without the need for plugins or third-party app stores. On top of that, they can reuse existing codebases — making the transition to the web smoother and more cost-effective.
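To make the pipeline concrete, here is a hedged sketch (hand-assembled for illustration, not taken from the article) of the JavaScript side. The byte array encodes a tiny module equivalent to the C function `int add(int a, int b) { return a + b; }`, the kind of output a toolchain like Emscripten or `rustc` produces; in a real application you would load a compiled `.wasm` file with `fetch()` and `WebAssembly.instantiateStreaming()` rather than embedding bytes:

```javascript
// Hand-assembled bytes of a module equivalent to:
//   int add(int a, int b) { return a + b; }
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // header
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const module = new WebAssembly.Module(wasmBytes);  // compile
const instance = new WebAssembly.Instance(module); // instantiate (no imports needed)
console.log(instance.exports.add(2, 3)); // 5
```

The source language disappears at this point: the browser only ever sees the binary module and its exports.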

Security and Sandboxing

WebAssembly’s architecture is engineered to keep security at its core, offering a secure execution environment and vital features like memory safety.

The execution environment operates in a “sandbox,” limiting the potential for security vulnerabilities. This means that even if potentially harmful code is loaded, its effects are contained and it can’t harm the user’s system. This is crucial in a threatscape where dangers are manifold — from malware to identity theft.

WebAssembly employs a multi-layered security approach to counter cybercriminals targeting financial and personal details. By executing code in a confined environment, WebAssembly minimizes the risk of unauthorized access to sensitive information — such as a Social Security number or geolocation data. Encryption protocols can also be implemented in a WebAssembly module, further securing the transfer and storage of vital information.
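One way to see the sandbox in action is through linear memory: a module's entire address space is a host-created buffer that the embedder sizes, caps, and can inspect at will. A small sketch (illustrative, using only the standard JavaScript API):

```javascript
// A Wasm module's entire address space is a "linear memory": a sized, growable
// buffer. Code inside the sandbox can touch these bytes and nothing else:
// no DOM, no filesystem, no other process's memory.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 2 }); // units: 64 KiB pages
console.log(memory.buffer.byteLength); // 65536: exactly one page

// The host can inspect or rewrite any byte a module could ever see.
new Uint8Array(memory.buffer)[0] = 42;

// Growth is explicit and capped by `maximum`; grow() returns the old size in pages.
console.log(memory.grow(1));           // 1
console.log(memory.buffer.byteLength); // 131072
```

Any access outside these bounds traps and terminates the module, which is what contains the blast radius of buggy or hostile code.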

Therefore, businesses and end users can engage in online activities with greater peace of mind, knowing that WebAssembly’s robust security infrastructure is actively working to mitigate the risks associated with online scams and threats.

Diverse Use Cases of WebAssembly

As Wasm continues to carve its niche in programming, examples of how it’s revolutionizing the web are becoming more plentiful by the day. The following use cases illustrate Wasm’s versatility and potential.

Gaming

The gaming industry is witnessing a transformative phase, with browser-based games no longer being simple or rudimentary. Thanks to WebAssembly, devs can now bring graphics-intensive games — which were previously reserved for dedicated gaming consoles or powerful PCs — directly to web browsers.

Multimedia Applications

Multimedia applications are vast in scope, encompassing video editing, image processing, audio manipulation, and more. WebAssembly facilitates the creation of browser-based tools that closely mirror their desktop counterparts in terms of functionality and performance. Imagine complex tasks like video rendering, real-time audio processing, or even 3D model design, all being executed smoothly within your browser.

Scientific Computing

In fields like biology, physics, and engineering, computational tasks can be heavy and demanding. Simulations, modeling and data processing, which once required dedicated software and hardware, are now being made available on the web. Wasm can handle the computational intensity, making it possible to run, for example, a genetic sequencing algorithm or an aerodynamics simulation directly from a browser.

Augmented Reality (AR) and Virtual Reality (VR)

Thanks to Wasm, AR and VR are expanding beyond gaming. From virtual shopping experiences and interactive educational platforms to immersive art installations, WebAssembly provides the backbone needed for these applications.

Other Applications

Innovations like the WebAssembly System Interface (WASI) are pushing boundaries, making it feasible for Wasm to run server-side applications. It’s also a great way to improve the performance of web applications, especially custom ones.

Nowadays, plenty of organizations use bespoke solutions for almost every aspect of their workflow. From everyday tools like a custom DOCX editor or image format converter, to complex creations like rendering engines, WebAssembly is a universal solution.

Not only is it secure, which helps protect IP and trade secrets, but it also runs at near-native speeds, which drastically cuts wasted time during collaborative work sessions. There’s also a noticeable increase in resource efficiency on the backend, allowing organizations to better allocate resources, especially computing power.

Then, you also have e-learning platforms — they often incorporate interactive simulations, coding environments, and real-time feedback systems. WebAssembly can power these features, ensuring learners have a seamless and responsive experience, whether they’re experimenting with a physics simulation or practicing coding in an online IDE.

Last but not least, we can’t forget how important real-time data processing and complex mathematical calculations are in financial platforms. WebAssembly allows for the creation of high-speed trading tools, real-time analytics dashboards, and other financial applications directly in the browser, with performance being comparable to more dedicated software.

Drawbacks of WebAssembly

The use cases and benefits of Wasm, although they are plentiful, don’t exist in a vacuum. In order to fully utilize this platform, one must also consider its drawbacks. This is essential in understanding Wasm’s full array of capabilities. With that in mind, think about the following when considering whether to use WebAssembly.

1. Lack of Garbage Collection

In many modern languages — like JavaScript, Java, or Python — garbage collection is an automatic process that frees up memory no longer in use by the application. WebAssembly’s absence of a native garbage collection system creates a complex environment for memory management. This can be jarring at first, especially for developers accustomed to having memory managed for them automatically.

Developers have to explicitly allocate and deallocate memory, which greatly elevates the risk of memory-related issues like leaks or buffer overflows. Moreover, this also imposes limitations on which programming languages can be ported effectively to WebAssembly. So far, the most suitable options include:

  • Rust: It provides excellent memory safety features and has a robust ecosystem around WebAssembly. The Rust compiler can directly target WebAssembly, and there are tools like wasm-bindgen to facilitate smooth interaction between Rust and JavaScript.
  • C/C++: These languages offer fine-grained control over memory and are well-supported by WebAssembly. The Emscripten toolchain allows C/C++ code to be compiled to WebAssembly easily. Many existing codebases written in these languages have been successfully ported to WebAssembly.
  • AssemblyScript: This is a TypeScript-like language tailored for WebAssembly. It provides a familiar syntax for JavaScript and TypeScript developers, while being designed to compile to WebAssembly efficiently.
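To illustrate what manual memory management looks like in practice, the sketch below (hand-assembled module bytes and a hypothetical `alloc` helper, purely for illustration) instantiates a module that exports one page of linear memory, then manages offsets in it with a trivial bump allocator: exactly the kind of bookkeeping a language runtime or the developer must supply when there is no garbage collector.

```javascript
// A module that declares and exports one 64 KiB page of linear memory as "mem".
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // header
  0x05, 0x03, 0x01, 0x00, 0x01,                         // memory section: min 1 page
  0x07, 0x07, 0x01, 0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00, // export "mem" (a memory)
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// With no garbage collector, something must track which byte ranges are live.
// A trivial bump allocator: hand out offsets and never reclaim them (leaks by design).
let next = 0;
function alloc(size) {
  const offset = next;
  next += size;
  return offset;
}

const view = new Uint8Array(exports.mem.buffer);
const offset = alloc(5);
view.set([72, 101, 108, 108, 111], offset); // write "Hello" at our offset
console.log(new TextDecoder().decode(view.subarray(offset, offset + 5))); // "Hello"
```

Real toolchains ship far more sophisticated allocators (Emscripten bundles a malloc implementation; Rust brings its ownership model), but the burden of deciding when memory is free never goes away.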

2. Limited Access to Web APIs

WebAssembly operates in a sandboxed execution environment separate from the JavaScript runtime. As a result, it can’t natively perform operations like DOM manipulations, or fetch data from web APIs.

To do these, it has to go through JavaScript, which creates a bottleneck and adds latency to the operations. This situation is particularly problematic for applications that require real-time interaction with the web environment. For instance, in gaming or interactive simulations where speed and real-time updates are crucial, having to go through a JavaScript bridge can undermine the very performance benefits that WebAssembly aims to offer.
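The JavaScript bridge is visible directly in the import mechanism: a Wasm module declares the host functions it needs, and every call to them crosses the Wasm/JS boundary. A minimal sketch (module bytes hand-assembled for illustration):

```javascript
// A module whose only capability is calling one imported host function,
// (import "env" "log"). Every DOM or Web API touch crosses a boundary like this.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // header
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,                   // type: () -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,             // import module "env"...
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                   // ...field "log" (func, type 0)
  0x03, 0x02, 0x01, 0x00,                               // one defined function
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export it as "run"
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b,       // body: call the import, end
]);

let calls = 0;
const importObject = { env: { log: () => { calls += 1; } } }; // the JS "glue"
const { exports } = new WebAssembly.Instance(
  new WebAssembly.Module(bytes), importObject);

exports.run();      // Wasm -> JS round trip
console.log(calls); // 1; each such crossing pays the bridge cost
```

A game drawing to a canvas sixty times a second makes thousands of these crossings, which is where the bridge overhead starts to erode Wasm's raw compute advantage.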

3. Debugging Difficulties

The debugging ecosystem for WebAssembly is far from mature. The tools available are not as advanced as those for other languages, and the debugging process can often be cumbersome.

If a developer is looking to debug the WebAssembly binary directly, the available options are quite limited. Setting breakpoints, watching variables, and other typical debugging activities become tricky. Source maps, which connect your WebAssembly code back to your readable source code, offer some reprieve, but they too are still not well integrated into most debugging tools. This often results in longer, more frustrating development cycles, especially in setups where Docker and Wasm are used together.

4. Browser Compatibility

While most modern web browsers offer good support for WebAssembly, there’s a significant portion of users on browsers that don’t. These include Opera Mini, Internet Explorer 11, and QQ Browser.

Thus, when targeting a broad user base, developers need to write fallback mechanisms — usually in JavaScript. This creates a situation where essentially two versions of the same functionality have to be maintained. This not only increases the initial development time, but also inflates the ongoing maintenance costs. Every new feature or bug fix has to be implemented twice and then tested across multiple versions, which can be resource-intensive.
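A common pattern is to feature-detect WebAssembly before choosing a code path. The sketch below (a hypothetical structure; the Wasm branch is left as a placeholder for instantiating a real compiled module) shows why two implementations of the same feature end up being written and maintained:

```javascript
// Feature-detect a working WebAssembly engine before choosing a code path.
function supportsWasm() {
  try {
    return typeof WebAssembly === 'object' &&
      typeof WebAssembly.instantiate === 'function' &&
      // Validate the smallest possible module (magic + version header).
      WebAssembly.validate(
        new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]));
  } catch (_err) {
    return false;
  }
}

// Two parallel implementations of the same feature must now be maintained.
function makeAdder() {
  if (supportsWasm()) {
    // Placeholder: a real app would instantiate its compiled .wasm module here.
    return (a, b) => a + b;
  }
  return (a, b) => a + b; // plain JavaScript fallback for older browsers
}

const add = makeAdder();
console.log(add(2, 3)); // 5
```

Every bug fix and new feature now touches both branches, and both must be tested across the browser matrix.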

5. Security Concerns

WebAssembly is designed to execute code securely within a sandboxed environment, but this doesn’t eliminate all security concerns. Since it allows for the execution of low-level code, there’s a potential for new kinds of cyberattacks, such as integer overflows or buffer overflows, which are less common in high-level languages like JavaScript.

These concerns demand rigorous security auditing and often require developers to adopt additional security measures, like data validation and input sanitization, which add more layers of complexity and can be resource-intensive.

6. Complexity

WebAssembly’s low-level architecture provides a high degree of control over the code execution, but it also demands a deep understanding of computer architecture and low-level programming constructs.

Unlike JavaScript or Python, which handle many operations automatically, WebAssembly requires you to manually manage aspects like memory allocation, stack operations, and binary arithmetic. Such intricacies make WebAssembly less approachable for developers without a background in systems programming or computer architecture.

Additionally, the current documentation and educational resources for WebAssembly are not as abundant as for more established languages, making the already notoriously steep learning curve even steeper — and an unpleasant experience — for web development newcomers.

7. File Size

Despite its compiled, binary nature, WebAssembly files are not always smaller than their JavaScript equivalents. Especially when using large libraries or frameworks, the WebAssembly file size can become a concern.

This is particularly important for users on mobile devices or slower network connections, where large files can lead to noticeably slower load times. Developers need to consider this trade-off carefully; even though WebAssembly may execute more quickly once loaded, the initial load time could adversely affect the user experience enough to offset this advantage.

8. Ecosystem Maturity

WebAssembly is a relatively new technology, and as such, its ecosystem is not as rich or mature as other programming languages. JavaScript, for example, has a plethora of frameworks, libraries, and tools developed over more than two decades.

In contrast, WebAssembly’s more limited ecosystem often requires developers to build components from scratch, leading to longer development cycles. Furthermore, while community support is growing, the number of experts and available learning resources is currently limited, making problem-solving more challenging.

9. SEO Implications

Search engines are primarily designed to index text-based content like HTML, CSS, and JavaScript. With WebAssembly’s binary format, the standard tools and practices for search engine optimization may not be directly applicable. This requires additional strategies, such as server-side rendering or using JavaScript to dynamically generate SEO-friendly metadata, to ensure that the content is accessible to search engine crawlers.

These extra steps can add a level of complexity that could make WebAssembly less attractive for publicly accessible web projects where SEO is a major concern. Likewise, if WebAssembly is implemented poorly and slows down the website or causes compatibility issues, user engagement can suffer, forcing SEO teams into additional remediation work such as link building, retargeting, and increased outreach. Left unfixed, this can drag down the site’s rankings.

Potential Future Evolution of Wasm

As WebAssembly continues to mature and grow in prominence, it is poised to play a pivotal role in the next generation of web and beyond. Here’s a closer look into the potential trajectory of its evolution:

  • Expanding Beyond the Browser: Wasm’s beginnings may have been browser-centric, but its destiny appears universal with advanced projects like WASI (WebAssembly System Interface) being just the tip of the iceberg. The idea of having a universal runtime can revolutionize software distribution, allowing devs to write once and have their applications run anywhere — from browsers and servers to edge devices and even embedded systems.
  • Integration with Emerging Technologies: The dynamic nature of technology ensures that there are always new frontiers to explore. As technologies like AR, VR, Machine Learning, and IoT gain momentum, WebAssembly is positioned to be at the heart of these integrations. We could soon see more intelligent web applications, augmented reality tools, and interconnected devices leveraging the power and efficiency of Wasm.
  • Growing Tooling and Community Support: A robust technology is only as good as its ecosystem. As Wasm gains traction, the developer community surrounding it is thriving, leading to the emergence of more advanced tools, frameworks and libraries.
  • Enhanced Interoperability: While WebAssembly already boasts significant interoperability with JavaScript and other web technologies, the future may hold even more seamless integrations. As the web standards evolve, we may see WebAssembly become even more central to the core of the internet, allowing for richer, more dynamic, and interactive web experiences.

The Web’s Next Frontier

In short, WebAssembly is redefining the benchmarks of what’s possible on the web. For developers, it’s not just another tool in the arsenal, but a transformative technology that opens up avenues previously deemed unattainable.

As it continues to evolve, the lines between native and web applications might be blurred even further, leading to a truly unified and more robust internet in the future.

Can WebAssembly Get Its Act Together for a Component Model? https://thenewstack.io/can-webassembly-get-its-act-together-for-a-component-model/ Thu, 14 Sep 2023 16:33:48 +0000 https://thenewstack.io/?p=22717260

The post Can WebAssembly Get Its Act Together for a Component Model? appeared first on The New Stack.

The final mile for WebAssembly remains a work in progress as the Wasm community races to finalize a common standard. Among other things, it awaits the standardization of WASI, the component interface layer required to ensure endpoint compatibility among the different devices and servers on which Wasm applications are deployed. Progress has been made so that apps written in different languages can be deployed with no configuration across numerous and varied endpoints, but WebAssembly’s survival as a widely adopted tool remains at stake until such a standard is completed. However, the community is aggressively working to finalize the component model, as became apparent during the many talks given at the first Linux Foundation-sponsored WasmCon 2023 last week.

“WebAssembly has boasted a handful of small advantages over other runtime technologies,” Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack. “The component model is the big differentiator. It is the thing that opens up avenues for development that have simply never existed before. It’s fair to call this an existentially important moment for WebAssembly.”

Implementations for WASI-Preview 2

This roadmap, released in July, reflects changes occurring in standards within the WebAssembly Community Group (CG) and the WASI Subgroup within the W3C. This includes the WebAssembly core specification, the WebAssembly Component Model, WASI (WebAssembly System Interface) and a number of WASI-based interfaces.

The Component Model proposal, developed on top of the core specification, includes the WebAssembly Interface Types (WIT) IDL. WIT is the language of high-level types used to describe the interfaces of a component, as Bailey Hayes, director of the Bytecode Alliance Technical Standards Committee and CTO at Cosmonic, explained in a blog post.

The Component Model adds high-level types with imported and exported interfaces, making components composable and virtualizable, Hayes said. This matters for allowing different programming languages to function in the same module, because it allows for the creation and combination of components that were originally written in different programming languages.

The latest standards for WebAssembly (Wasm) are of significant importance as they focus the efforts of developers, community members, and adopters on tooling that supports a portable ecosystem, Liam Randall, CEO and co-founder of Cosmonic, told The New Stack. “With a focus on WebAssembly Components, they enable Components to act as the new containers, ensuring portability across various companies developing across the landscape,” Randall said. “This standardization also fosters better collaboration between language tooling that creates components from various languages and hot-swappable modules defined by WASI. What this means to developers is that we can now use code from across our language silos, creating a powerful ‘better together’ story for the WebAssembly ecosystem.”

In other words, WASI-Preview 2 is an exciting step as it addresses critical areas such as performance, security and JavaScript interactions — and one more step on the journey toward interoperability, Torsten Volk, an analyst for Enterprise Management Associates (EMA), told The New Stack. “The common component model is absolute key for accelerating the adoption of WebAssembly, as it is the precondition for users to just run any of their applications on any cloud, data center or edge location without having to change app code or configuration,” Volk said.

An API call requesting access to a GPU, a database or a machine learning model would then work independently of the specific type of the requested component, Volk said. “This means I could define how a datastream should be written to a NoSQL database and the same code function would work with MongoDB, Cassandra or Amazon DynamoDB,” Volk said.

WASI began as a POSIX-style library for WebAssembly. However, it has outgrown those roots, becoming something more akin to JavaScript’s WinterCG: a core set of interfaces to commonly used features like files, sockets, and environments, Butcher said. “WASI Preview 2 exemplifies this movement away from POSIX and toward a truly modern set of core features. Instead of re-implementing a 1970s vision of network computing, WASI is moving to a contemporary view of distributed applications.”

The component aspect plays a key role in the release of new features for Fermyon’s experimental SDK for developing Spin apps using the Python programming language.

Relating to components, Fermyon’s new componentize-py can be used to build a simple component using a mix of Python and native code, type-check it using MyPy, and run it using Spin. The user can then update the app to use the wasi-http proposal, a vendor-agnostic API for sending and receiving HTTP requests.

“Providing developers with the ability to integrate with runtime elements that are not yet completely defined by a CCM [common component model] makes it less likely for them to hit a wall in their development process, and should therefore be welcomed,” Volk said.

Regarding Python, it is a “top language choice, and is vitally important for AI,” Butcher said. “Yet, to this point some of the most powerful Python libraries like NumPy have been unavailable. The reason was that these core libraries were written in C and dynamically loaded into Python,” Butcher said. “Who would have thought that the solution to this conundrum was the Component Model?”

With the new componentize-py project, Python can take its place as a top-tier WebAssembly language, Butcher noted. “Most excitingly, we are so close to being able to link across language boundaries, where Rust libraries can be used from Python or Go libraries can be used from JavaScript,” Butcher said. “Thanks to the Component Model, we’re on the cusp of true polyglot programming.”

Future Work

The work to finalize a component model, which is necessary for Wasm to see wide-scale adoption, remains ongoing as an extension of the incremental steps described above, Luke Wagner, a distinguished engineer for edge cloud platform provider Fastly, told The New Stack during WasmCon 2023 last week. Wagner defines a component as a “standard, portable, lightweight, finely sandboxed, cross-language compositional module.”

During his conference talk, Wagner described the developer preview to be released this year:

  • Preview 2 covers both the component model and a subset of WASI interfaces.
  • The top-line goals are stability and backward compatibility.

“We have an automatic conversion tool converting Preview 1 core modules to Preview 2 components, and we’re committing to having a similar tool in the future to convert Preview 2 components into whatever comes next,” Wagner said during his talk.

Preview 2 features include, Wagner said:

  • A first wave of languages that includes Rust, JavaScript, Python, Go and C.
  • A first wave of WASI proposals, including filesystem, sockets, CLI, HTTP and possibly others.
  • A browser/Node.js polyfill: jco transpile.
  • Preliminary support for WASI virtualization, in the form of wasi-virt.
  • Preliminary support for component composition, in the form of wasm-compose.
  • Experimental component registry tooling, in the form of warg.
  • “Next year it’s all about improving the concurrency story,” Wagner said. This is because Preview 2 “does the best it can but concurrency remains warty.”

These “wart” aspects Wagner described include:

  • Async interfaces are too complex for direct use and need manual glue code; the general goal is to be able to use the automatic bindings directly, without glue code.
  • Streaming performance isn’t as good as it could be.
  • Concurrency is not currently composable, which means two components doing concurrent work can end up blocking each other in some cases. And if you virtualize one of these async interfaces, you end up having to virtualize them all.

Preview 3 will be designed to:

  • Fix these drawbacks by adding native future and stream types to WIT and components.
  • Pave the way for ergonomic, integrated automatic bindings for many languages.
  • Offer an efficient io_uring-friendly ABI.

Composable concurrency: For example, in Preview 2 there are two interfaces for HTTP — one for outgoing requests and one for incoming ones — with different types and different signatures, Wagner said. With Preview 3, the two will be merged into a single interface, the WASI handler.

This will allow for a single component that both imports and exports the same interface: it will be possible to import a handler for outgoing requests and export a handler to receive incoming requests. Because they use the same interface, two services can be chained and linked directly together using component linking, and executing the whole compound request then becomes just an async function call, supporting modularity without requiring a microservices architecture.

“Our goal by the end of this year is to complete the Preview 2 milestones, which will lead to a stable, maybe beta, release,” Wagner told The New Stack after WasmCon 2023 last week.

“The idea is, once we hit this, you will continue to be able to produce Preview 2 binaries and run them in Preview 2 engines so stuff stops breaking.”

WebAssembly Reaches a Cloud Native Milestone https://thenewstack.io/webassembly-reaches-a-cloud-native-milestone/ Mon, 11 Sep 2023 14:50:44 +0000 https://thenewstack.io/?p=22717781

The post WebAssembly Reaches a Cloud Native Milestone appeared first on The New Stack.

The CNCF WebAssembly Landscape Report published last week offered an overview of the status of WebAssembly (Wasm) as a technology and its adoption at this time. As WebAssembly’s growth and adoption continue, the report provides a good summary of the WebAssembly players, tools, usage and how it works, as well as its overlap with cloud native environments.

The report also underscores an unofficial turning point for WebAssembly as measured by adoption alone: the initial Wasm landscape revealed in the report has rapidly expanded beyond use in the web browser to represent 11 categories and 120 projects or products, worth an estimated $59.4 billion.

It will be a long road before WebAssembly reaches its full potential. But in theory, Wasm is designed as a way to deploy code written in any language in a secured sandbox anywhere, on any device and any CPU instruction set, through a single module. The technology is not there yet, of course, but a number of developments were discussed and demonstrated at WasmCon 2023 last week — itself an additional milestone as the first stand-alone Linux Foundation Wasm event beyond the umbrella of KubeCon + CloudNativeCon.

In many ways, the Wasm landscape is similar to the early days of Kubernetes’ then-burgeoning development and adoption a few years ago. While discussing the report and WebAssembly’s status in the cloud native landscape during a WasmCon keynote, CNCF CTO Chris Aniszczyk said he sees Wasm today where cloud native and containers were in their early days.

“Remember back in the day there was a lot of innovation happening in the container and cloud native space: there were multiple runtimes, multiple specs, everyone kind of fighting for mindshare,” Aniszczyk said. “I feel like something similar is happening in the Wasm state and that’s kind of where we currently are… A lot of the adoption and innovation are happening among the early adopters and will naturally progress.”

While Aniszczyk insisted that Wasm is still in its early stages of development and “a lot of the early stuff is still brewing,” he noted how the CNCF has been an early adopter of the technology. “A lot of our projects have used WebAssembly.”

Indeed, Wasm is expected to play a large role as an ultralight way to deploy sandboxed applications to endpoints in cloud native environments. Wasm also has its niche uses beyond the container sphere. “WebAssembly complements and piggybacks on the existing Kubernetes ecosystem, opening up many new opportunities,” Daniel Lopez Ridruejo, founder and former CEO of Bitnami (now part of VMware), told The New Stack. “WebAssembly can run on microcontrollers and IoT devices in a way that Kubernetes never could, as there are many devices where you cannot even use a container. So, the momentum is building with many different industry players coming together to build a platform for it.”

Among application frameworks alone, the CNCF covers Spin, WasmCloud (CNCF sandbox), SpiderLightning, WasmEdge plug-ins, Dapr SDK for WasmEdge, Homestar, Ambient, WASIX, Extism, Timecraft, vscode-wasm, and WasmEx.

The CNCF’s coverage now extends to many more areas: runtimes, plugins, and other uses with and for AI, edge devices, web and mobile deployments, and a number of other applications.

The State of Wasm 2023 report was also released at WasmCon. The survey of 255 WebAssembly users was conducted by SlashData in collaboration with the CNCF. Key findings included:

  • While Wasm is still primarily used to develop web applications (58%), its use is expanding beyond this original use case into new areas like data visualization (35%), Internet of Things (32%), artificial intelligence (30%), backend services (excluding serverless) (27%), and edge computing (25%).
  • The most significant benefits attracting developers to Wasm are faster loading times (23%), opportunities to explore new use cases and technologies (22%), and sharing code between projects (20%).
  • The top challenges faced by Wasm users were difficulties with debugging and troubleshooting (19%), as well as different performance and a lack of consistent developer experience between runtimes (both at 15%). At the same time, 17% of respondents did not face any challenges.

Is It too Early to Leverage AI for WebAssembly? https://thenewstack.io/is-it-too-early-to-leverage-ai-for-webassembly/ Wed, 06 Sep 2023 17:28:24 +0000 https://thenewstack.io/?p=22717258


AI and its application to IT, software development, and operations are just beginning to take hold, portending profound implications and disruptions for how humans’ roles will evolve, especially in the near and long term.

On a smaller scale, WebAssembly represents a technology that is generating significant hype while demonstrating its viability. However, successful business-model adoption has yet to be realized, mainly due to a lack of standardization for the final endpoint. Meanwhile, at least one vendor, Fermyon, believes that applying AI to WebAssembly is not premature at this stage.

So, how can AI potentially help Wasm’s development and adoption, and is it too early to determine? As Angel M De Miguel Meana, a staff engineer at VMware’s Office of the CTO, noted, the AI ecosystem has evolved drastically during the last year, since the introduction of ChatGPT brought AI to the forefront of software development. Meanwhile, “WebAssembly provides a solid base to run inference not only on the server, but in many different environments like browsers and IoT devices,” De Miguel Meana said. “By moving these workloads to end-user devices, it removes the latency and avoids sending data to a centralized server, while being able to work on the type of heterogeneous devices often found at the edge… Since the Wasm ecosystem is still emerging, integrating AI in early stages will help to push new and existing AI related standards. It is a symbiotic relationship.”

Perfect Pairing

“We started Fermyon with the goal of building a next-wave serverless platform. AI is very clearly part of this next wave. In our industry, we frequently see revolutionary technologies grow up together: Java and the web, cloud and microservices, Docker and Kubernetes,” Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack. “WebAssembly and AI are such a perfect pairing. I see them growing up (and growing old) together.”

“Baking” AI models, such as LLMs [large language models] or transformers, into the WebAssembly runtime is the logical next step to accelerate the adoption of WebAssembly, Torsten Volk, an analyst for Enterprise Management Associates (EMA), told The New Stack. Similar to calling a database service via an API, compiled WebAssembly apps (binaries) could then send their API request to the WebAssembly runtime, which in turn would relay this call to the AI model and pipe the model response back to the originator, Volk said.

“These API requests will become very powerful once we have a common component model (CCM) that provides developers with one standardized API that they can use to access databases, AI models, GPUs, messaging, authentication, etc. The CCM would then let developers write the same code to talk to an AI model (e.g. GPT or Llama) on any kind of server in the data center, cloud or even at edge locations, as long as this server has sufficient hardware resources available,” Volk said. “This all boils down to the key question of when industry players will agree on a CCM. In the meantime, WebAssembly clouds such as Fermyon can leverage WebAssembly to make AI models portable and scalable within their own cloud infrastructure where they do not need a CCM and pass on some of the savings to the customer.”
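For illustration, Volk’s relay model can be sketched in a few lines of Python. Everything here is hypothetical (no CCM standard or API exists yet); the sketch only mirrors the flow he describes: a guest app makes one standardized call, the runtime relays it to whichever backing service is registered, and pipes the response back.

```python
# Illustrative sketch only: models Volk's description of a runtime that
# relays standardized API calls from a Wasm guest app to backing services.
# All class and method names are hypothetical; no CCM standard exists yet.

class Runtime:
    def __init__(self):
        self._services = {}  # capability name -> handler callable

    def register(self, capability, handler):
        """Host side: wire a backing service (DB, AI model, ...) to a capability."""
        self._services[capability] = handler

    def call(self, capability, request):
        """Guest side: one standardized entry point for every capability."""
        handler = self._services.get(capability)
        if handler is None:
            raise LookupError(f"capability {capability!r} not provided by host")
        return handler(request)  # relay the call and pipe the response back


# The same guest code works against any host that provides the capability,
# whether the model behind it is GPT, Llama or something else entirely.
runtime = Runtime()
runtime.register("ai-inference", lambda prompt: f"echo: {prompt}")
reply = runtime.call("ai-inference", "summarize this article")
```

The point of the sketch is that the guest never names a concrete model or server; swapping GPT for Llama is a host-side `register` change, not a guest code change.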

Solving the Problem

Meanwhile, Fermyon believes that applying AI to WebAssembly is not premature at this stage. As Butcher noted,  developers tasked with building and running enterprise AI apps on LLMs like LLaMA2 face a 100x compute expense for access to GPUs at $32/instance-hour and upwards. Alternatively, they can use on-demand services but then experience abysmal startup times. This makes it impractical to deliver enterprise-based AI apps affordably.

Fermyon Serverless AI has solved this problem by offering sub-second cold start times, over 100x faster than other on-demand AI infrastructure services, Butcher said. This “breakthrough” is made possible by the serverless WebAssembly technology powering Fermyon Cloud, which is architected for sub-millisecond cold starts and high-volume time-slicing of compute instances, and which has proven to alter compute densities by a factor of 30x, he said. Extending this runtime profile to GPUs makes Fermyon Cloud the fastest AI inferencing infrastructure service, Butcher said.

Such an inference service is “very interesting” as the typical WebAssembly app consists of only a few megabytes, while AI models are a lot larger than that, Volk said. This means they would not be able to start up quite as fast as traditional WebAssembly apps. “I assume that Fermyon has figured out how to use time slicing for providing GPU access to WebAssembly apps so that all of these apps can get the GPU resources they need by reserving a few of these time slices via their WebAssembly runtime,” Volk said. “This would mean that a very large number of apps could share a small number of expensive GPUs to serve their users on-demand. This is a little bit like a time-share, but without being forced to come to the lunchtime presentation.”

Getting started using Spin.

So, how would the user interact with Serverless AI? With Fermyon’s Serverless AI, there are no REST APIs or external services — it’s just built locally to Fermyon’s Spin and also in Fermyon Cloud, Butcher explained. “Anywhere in your code, you can simply pass a prompt into Serverless AI and get back a response. In this first beta, we’re including LLaMa2’s chat model and the recently announced Code Llama code-generating model,” Butcher said. “So, whether you’re summarizing text, implementing your own chatbot, or writing a backend code generator, Serverless AI has you covered. Our goal is to make AI so easy that developers can right away begin leveraging it to build a new and jaw-dropping class of serverless apps.”
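As a rough sketch of that developer experience, consider the following Python stand-in. The `infer` function below is invented for illustration and is not the real Spin or Fermyon Cloud API; it only mirrors the idea that inference is a plain local call rather than a REST request to an external service.

```python
# Hypothetical sketch of the developer experience Butcher describes:
# pass a prompt in anywhere in your code and get a response back.
# `infer` is a stand-in for a host-provided call, not the real Spin API.

def infer(model, prompt):
    """Stand-in for a host-provided inference call (e.g. a LLaMA2 chat model)."""
    return {"model": model, "completion": f"[response to: {prompt}]"}

def summarize(text):
    # No REST API or external service to wire up: inference is just a call.
    return infer("llama2-chat", f"Summarize: {text}")["completion"]

result = summarize("WebAssembly pairs well with serverless AI.")
```

In the real platform the host runtime would dispatch this call to a GPU-backed model; the shape of the code, a direct function call from anywhere in the app, is the part the sketch is meant to convey.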

Big Implications

Using WebAssembly to run workloads, it is possible to use Fermyon Serverless AI  to assign a “fraction of a GPU” to a user application “just in time” to execute an AI operation, Fermyon CTO and co-founder Radu Matei wrote in a blog post. “When the operation is complete, we assign that fraction of the GPU to another application from the queue,” Matei wrote. “And because the startup time in Fermyon Cloud is milliseconds, that’s how fast we can switch between user applications that are assigned to a GPU. If all GPU fractions are busy crunching data, we queue the incoming application until the next one is available.”

This has two big implications, Matei wrote. First, users don’t have to wait for a virtual machine or container to start and for a GPU to be attached to it. Also, “we can achieve significantly higher resource utilization and efficiency for our infrastructure,” Matei wrote.
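The queueing behavior Matei describes can be modeled in a short, purely illustrative Python sketch: a fixed pool of GPU fractions is handed out just in time, and incoming apps wait in a queue whenever every fraction is busy.

```python
# Illustrative model of just-in-time GPU time-slicing: a fixed pool of GPU
# fractions is assigned to apps as they arrive; when all fractions are busy,
# incoming apps queue until one frees up. Names and data model are invented.
from collections import deque

class GpuPool:
    def __init__(self, fractions):
        self.free = list(range(fractions))  # available GPU fraction ids
        self.queue = deque()                # apps waiting for a fraction

    def submit(self, app):
        if self.free:
            return (app, self.free.pop())   # assign a fraction immediately
        self.queue.append(app)              # all fractions busy: queue it
        return None

    def release(self, fraction):
        """An app finished: hand its fraction straight to the next queued app."""
        if self.queue:
            return (self.queue.popleft(), fraction)
        self.free.append(fraction)
        return None

pool = GpuPool(fractions=2)
a = pool.submit("app-a")    # gets fraction 1 immediately
b = pool.submit("app-b")    # gets fraction 0
c = pool.submit("app-c")    # queued: every fraction is busy
n = pool.release(a[1])      # app-a done; app-c takes over its fraction
```

Because the switch in `release` is just a queue pop, the model reflects why millisecond startup matters: the faster an app can start, the faster a freed fraction goes back to work.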

Specific features Serverless AI offers that Fermyon communicated include:

  • This is a developer tool and hosted service for enterprises building serverless applications that include AI inferencing using open source LLMs.
  • Thanks to our core WebAssembly technology, our cold startup times are 100x faster than competing offerings, cutting down from minutes to under a second. This allows us to execute hundreds of applications in the same amount of time (and with the same hardware) that today’s services use to run one.
  • We provide a local development experience for building and running AI apps with Spin and then deploying them into Fermyon Cloud for high performance at a fraction of the cost of other solutions.
  • Fermyon Cloud uses AI-grade GPUs to process each request. Because of our fast startups and efficient time-sharing, we can share a single GPU across hundreds of apps.
  • We’re launching the free tier private beta.

Big Hopes

However, there is certainly a way to go before Wasm and AI concurrently reach their potential. During WasmCon 2023, Michael Yuan, CEO and co-founder of Second State and founder of WasmEdge, a runtime project for Wasm, discussed some of the work in progress. He covered the topic with De Miguel Meana during their talk “Getting Started with AI and WebAssembly” at WasmCon 2023.

“There’s a lot of ecosystem work that needs to be done in this space [of AI and Wasm]. For instance, having inferences alone is not sufficient,” Yuan said. “The million-dollar question right now is, when you have an image and a piece of text, how do you convert that into a series of numbers, and then after the inference, how do you convert those numbers back into a usable format?” 

Preprocessing and post-processing are among Python’s greatest strengths today, thanks to the availability of numerous libraries for these tasks, Yuan said. Incorporating these preprocessing and post-processing functions into Rust functions would be beneficial, but it requires more effort from the community to support additional modules. “There is a lot of potential for growth in this ecosystem,” Yuan said.

 

The post Is It too Early to Leverage AI for WebAssembly? appeared first on The New Stack.

]]>
Rust and C++ Work Better for WebAssembly https://thenewstack.io/rust-and-c-work-better-for-webassembly/ Wed, 09 Aug 2023 13:50:01 +0000 https://thenewstack.io/?p=22714584


Before we cover what must happen before WebAssembly can seamlessly support most, if not all, of the principal languages in use today, the status quo needs to be stated: we are still a ways away from developers writing their code with zero extra configuration for Wasm modules and then deploying an application simultaneously across a number of different environments, ranging from cloud deployments to IoT devices to on-premises servers.

Different applications written with different programming languages should be able to function within a single module. Essentially, a microservices-packed module should be able to be used to deploy multiple services across multiple disparate environments and offer application updates without reconfiguring the endpoints. In theory, it is just a matter of configuring the application in the module so that each environment in which the module is deployed does not have to be reconfigured separately once the work is done inside the module.

Among the RedMonk top-20 most popular languages, WebAssembly fully supports only Rust and Go across every target: apps running in the browser, on the core specification and on Fermyon’s Spin SDK. A component model that can accommodate even Rust and Go, much less all of the languages, on a single WASI target has yet to be finalized.

So what is the holdup?

It largely comes down to that last WebAssembly System Interface (WASI) layer and each language’s interaction with it.

Presently, Fermyon Spin can run applications in any language that compiles to WebAssembly with WASI support. However, languages with Spin SDK support gain advanced capabilities like fully integrated key/value storage, a NoOps SQL database and a built-in HTTP client. “Right now, Rust, Go, JavaScript/TypeScript, and Python have a Spin SDK,” said Matt Butcher, co-founder and CEO of Fermyon Technologies. “When the component model is released later this year, every language that supports the component model will gain access to the Spin SDK.”

To ensure different programming languages can work together within WebAssembly modules, they all need to create a translation layer. This layer helps convert their unique system calls into a format that WASI can understand, Torsten Volk, an analyst for Enterprise Management Associates (EMA), said.

“System calls are the requests a program makes for resources like storage, network or computing power. While this might sound straightforward, it’s actually quite complex,” Volk said. Each programming language has its own set of system calls, and these can be fundamentally different from one another. They need to be carefully adapted so that WASI can understand them, Volk said.

As Volk explained, several things can go wrong when a programming language has a system call designed to access the file system:

  • If WASI doesn’t directly support this system call, the translation layer has to figure out how to map it to a WASI-compatible call. This requires a deep understanding of both the programming language’s system calls and the WASI API, making it a complex task.
  • Some system calls might ask for more system access than WASI can provide. These calls might not be feasible to translate directly and could require complex workarounds.
  • The process of translating system calls takes time and uses infrastructure resources. This could make it challenging to maintain consistent performance.

In essence, while WASI provides a way for different languages to interoperate within WebAssembly, the process of adapting each language’s system calls to be WASI-compatible can be quite complex and resource-intensive, Volk said.
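A toy Python sketch can make the translation-layer idea concrete. The syscall and WASI call names below are invented for the example (real WASI functions differ); the first two failure modes Volk lists map onto the two error branches, while the third, translation overhead, is the cost of doing the lookup and adaptation at all.

```python
# Illustrative translation layer along the lines Volk describes: language-
# specific system calls are mapped to WASI-compatible calls where a mapping
# exists, and rejected where WASI cannot grant the requested access.
# The call names here are invented; they are not real WASI function names.

WASI_MAP = {
    "open_file": "wasi:path_open",   # a direct mapping exists
    "read_file": "wasi:fd_read",
    "raw_socket": None,              # asks for more access than WASI provides
}

def translate(syscall, *args):
    if syscall not in WASI_MAP:
        # Failure mode 1: no known mapping to a WASI-compatible call.
        raise NotImplementedError(f"no WASI mapping known for {syscall!r}")
    target = WASI_MAP[syscall]
    if target is None:
        # Failure mode 2: the call needs more access than WASI can grant.
        raise PermissionError(f"{syscall!r} exceeds what WASI provides")
    return (target, args)            # hand the adapted call to the runtime

mapped = translate("open_file", "/tmp/data.txt")
```

Every call pays the translation cost, which is the performance concern Volk raises even when the mapping itself succeeds.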

The recently released roadmap for WebAssembly by the WebAssembly Community Group (CG) and the WASI Subgroup within the W3C, covering WebAssembly Core, the WebAssembly Component Model, WASI and other WASI-based interfaces, did cover language support. The Component Model proposal, developed on top of the core specification, includes the WebAssembly Interface Types (WIT) IDL; WIT is the language of high-level types that are used to describe the interfaces of a component. The Component Model adds high-level types along with imported and exported interfaces, making components composable and virtualizable, as Bailey Hayes, director of the Bytecode Alliance Technical Standards Committee and a director at Cosmonic, wrote in a blog post. This is important for allowing different programming languages to function in the same module because it allows for the creation and combination of components that were originally written in different programming languages, Hayes wrote.

“This standardization also fosters better collaboration between language tooling that creates components from various languages and hot-swappable modules defined by the WebAssembly System Interface (WASI),” Liam Randall, CEO and co-founder of Cosmonic, told The New Stack. “What this means to developers is that we can now use code from across our language silos, creating a powerful ‘better together’ story for the WebAssembly ecosystem.”

Rust Never Sleeps

Wasm will eventually need to be able to adequately run the most popular languages for beyond-browser applications. The languages that Wasm can at least run in the browser thus far include JavaScript, Python, Rust, Go, .NET, C++, Java and PHP. But Wasm development needs to go a long way in order to become viable for all of them, as mentioned above.

“To this point, JavaScript-in-WebAssembly implementations have been based on the QuickJS engine, which is not a broadly used JavaScript runtime. Fermyon is pivoting to SpiderMonkey, the robust and battle-tested JavaScript engine that powers Firefox,” Butcher said. “This change will supercharge both JavaScript and TypeScript for WebAssembly. We’ll see a complete feature set, blazing fast performance, and compatibility with many existing JavaScript packages.”

This chart reports on the top 20 languages from RedMonk’s ranking. Some languages, like CSS, PowerShell and “Shell,” don’t really have a meaningful expression in Wasm; they are left here for completeness.

  • Core means there is an implementation of WebAssembly 1.0.
  • Browser means there is at least one browser implementation.
  • WASI means the language supports at least Preview 1 of the WASI proposal.
  • Spin SDK indicates there is a Spin SDK for the language.

Anything with WASI or Spin SDK support runs on Fermyon Cloud, Spin, and Fermyon Platform. Source: Fermyon.

Meanwhile, the most compatible languages for WebAssembly applications in production are Rust and C++ or C. That said, of the top 20 languages in analyst firm RedMonk’s language ranking, 16 have at least basic browser support.

“It is no coincidence that statically typed languages that do not require garbage collectors, like Rust and C/C++, were among the first languages to add WebAssembly as a target,” Hayes said. “Adding a new bytecode target for a virtual stack machine is more straightforward with systems-level languages that do not need to compile in an interpreter or garbage collector.”

But what makes Rust different from the rest is that it enjoyed a co-evolution with WebAssembly, Hayes said. Many of the Mozilla engineers working on WebAssembly within the SpiderMonkey engine were also part of the growth and development of the Rust language, Hayes said.

The primary goal of WebAssembly is to produce a bytecode that is extremely compact allowing for tiny and load time efficient binaries that are also memory-safe and sandboxed, Hayes told The New Stack. Rust has zero-cost abstractions and is attractive to engineers invested in creating memory-safe programs. “In many ways, WebAssembly and Rust appeal to each other’s goals and key strengths,” Hayes said.

Rust is downright hard to learn for many, if not most, developers. Yet, Hayes noted, Rust is consistently ranked the most loved language by developers.

“Rust was not on my radar for languages to check out until it reached its first stable release in 2015,” Hayes told The New Stack. “The interesting thing about 2015 is that it also happens to be the year WebAssembly design work began.”

C was also the original language targeted for WebAssembly support, and the group working on the specification paid close attention to the C/C++ ecosystem, Butcher noted. “Rust, like WebAssembly, was originally started at Mozilla and grew up alongside WebAssembly,” Butcher said. “Some of WebAssembly’s most active developers also worked on key pieces of the Rust ecosystem. So the technologies co-evolved.”

Python Rocks

Once a component standard is finalized for Wasi for the backend, Python support will be especially welcome. This is because Python’s big appeal lies in the massive number of easy-to-use libraries that often get developers to the 80% or even 90% without having to write a significant amount of code, Volk told The New Stack. “Assume I want to take this article and store it in a NoSQL database, such as MongoDB or Cassandra. First, we could use the Selenium library to find and retrieve the article on the TNS website and then we might leverage Pandas for identifying and storing the individual components of the article, e.g. headings, images, text, links, etc. in a handy data frame,” Volk said. “Finally, we could use the Cassandra-driver library or PyMongo to connect, authenticate, and write the article to our NoSQL database of choice. If we could get all of this to work on Wasm, we could share and sell our new app with anyone and without having to worry about the target cloud.”
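Volk’s hypothetical pipeline can be sketched in plain Python. The real version would use Selenium, Pandas and cassandra-driver or PyMongo as he describes; the stand-in functions below only mimic the shape of that scrape-structure-store workflow so the example stays self-contained.

```python
# Stand-in sketch of the scrape -> structure -> store pipeline Volk describes.
# Real code would use Selenium (retrieval), Pandas (structuring) and
# cassandra-driver or PyMongo (storage); these mocks just mirror the flow.

def fetch_article(url):
    """Stand-in for Selenium: return the raw article text for a URL."""
    return "# Heading\nBody text of the article."

def structure_article(raw):
    """Stand-in for a Pandas data frame: split the article into components."""
    lines = raw.splitlines()
    return {
        "headings": [l.lstrip("# ") for l in lines if l.startswith("#")],
        "text": [l for l in lines if l and not l.startswith("#")],
    }

class FakeNoSqlStore:
    """Stand-in for MongoDB/Cassandra: a dict keyed by article URL."""
    def __init__(self):
        self.docs = {}
    def insert(self, key, doc):
        self.docs[key] = doc

url = "https://thenewstack.io/example-article"
store = FakeNoSqlStore()
store.insert(url, structure_article(fetch_article(url)))
```

Getting the real library stack, not these mocks, to compile and run under Wasm is exactly the library-support gap Volk says Python’s component support would close.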

At Fermyon, Python support “is our focal point,” Butcher said. “We recently released an updated Python Spin SDK, and are already plowing ahead on a new version,” Butcher said. “We won’t slow momentum until we can support the common libraries and workloads used in modern Python AI and data processing.”

This challenge is very similar to the early days of Kubernetes when saving app state, storing app data, ensuring app performance and interacting with external systems “was still tricky,” Volk said. “Now we need to simply figure out the current limits of this approach within real-life projects and continuously push the boundaries, just like we did for Kubernetes,” Volk said. “But as a reward, we could ultimately store the entire application, including its server and container runtime, inside of a container registry for turnkey deployment.”

The post Rust and C++ Work Better for WebAssembly appeared first on The New Stack.

]]>
Where Does WebAssembly Fit in the Cloud Native World? https://thenewstack.io/where-does-webassembly-fit-in-the-cloud-native-world/ Thu, 03 Aug 2023 16:50:31 +0000 https://thenewstack.io/?p=22713792


This past January, Matt Butcher, co-founder and CEO of Fermyon Technologies, wrote an article about the future of WebAssembly for The New Stack in which he made a bold statement: “2023 will be the year that the component model begins redefining how we write software.”

In this episode of The New Stack Makers podcast, Butcher acknowledged that that’s a “grandiose claim.” But, as he told Makers host Heather Joslyn, the component model is likely to help WebAssembly more quickly integrate into the cloud native landscape.

An advantage of WebAssembly, or Wasm —  a binary instruction format for a stack-based virtual machine, designed to execute binary code on the web — is that it allows developers to write code in their preferred language and run it anywhere.

“When you think about the way that programming languages have evolved over time, every five to seven years, we see a new superstar programming language,” Butcher said. “And we’ve watched this pattern repeat: the language drops, and then it takes a couple of years, as everybody has to build up the same set of libraries.”

The component model, he said, is positioned to help eliminate this problem by providing “a common way for WebAssembly libraries to say, these are the things I need. These are my imports. And these are the things that I provide — these are my exports. And then we can compile that WebAssembly module, and it can express its needs and we can take another WebAssembly module and we can start joining them up.”
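The imports/exports matching Butcher describes can be sketched abstractly in Python. The data model below is invented for illustration (real components declare WIT-typed interfaces, not bare string names); it only shows how declared imports and exports let independently built modules be joined up.

```python
# Sketch of the composition idea Butcher describes: each component declares
# what it needs (imports) and what it provides (exports), and components can
# be joined when one's exports satisfy another's imports. The string-based
# data model is invented for illustration; real components use WIT interfaces.

class Component:
    def __init__(self, name, imports, exports):
        self.name, self.imports, self.exports = name, set(imports), set(exports)

def compose(a, b):
    """Join two components: b's imports must be satisfiable by a's exports."""
    unmet = b.imports - a.exports
    if unmet:
        raise ValueError(f"{b.name} has unmet imports: {sorted(unmet)}")
    # The composed unit exports what b exports and still needs a's imports.
    return Component(f"{a.name}+{b.name}", a.imports, b.exports)

# A library written in one language can satisfy an app written in another.
lib = Component("rust-lib", imports=[], exports=["http-client", "json"])
app = Component("py-app", imports=["http-client", "json"],
                exports=["handle-request"])
unit = compose(lib, app)
```

The matching step is the whole trick: because imports and exports are declared up front, a new language’s components can reuse existing libraries without reimplementing them.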

The Bytecode Alliance is in the midst of defining standards for the component model. The model holds enormous promise, Butcher said. Now, he said, if a new language shows up, “if it can compile to WebAssembly and use the component model, it can pull in libraries from other things. It reduces the barriers there. It means that the maintenance of existing libraries begins to shrink.”

“And that,” he added, “really is a big game changer.”

This conversation was sponsored by Fermyon Technologies.

No ‘Kubernetes Killer’

Notably, Butcher said, WebAssembly could help deliver — finally — on the promise of serverless.

Serverless was supposed to offer two key benefits, he said. One, that “you’re only running your software when you’re handling requests.” And the second, to free developers from the need to run a server and allow them to dive right into programming core business logic.

The problem, he added, is that serverless was built on what he called  “yesterday’s technology,” first virtual machines and then containers, which were built for long-running processes and aren’t cloud-agnostic.

“A virtual machine may take several minutes to start up; a container takes a couple dozen seconds to start up. And if you’re really trying to handle requests that are coming in and, you know, process the request and return a response as fast as possible, you’re stuck with a kind of design Catch-22. Your core platform can’t do it that fast.”

By contrast, WebAssembly has a rapid startup time and solves other problems for developers, Butcher said. When he and his team began querying developers about their experiences with the cloud native ecosystem, they heard enthusiasm from devs about serverless.

Developers, Butcher said, told them: “If I could just find a platform that didn’t have this slow startup time and had a better developer experience, was cheaper to operate, was cross-platform and cross-architecture, that would make me so happy.

“So it’s kind of like having people define a product for you and say, ‘Here’s my wish list of things. Can you build me one of these?’ That’s why I think serverless is in this position right now, where we’re gonna see a big resurgence of it.”

While Butcher acknowledged that he once believed WebAssembly might be a “Kubernetes killer,” he now said he thinks comparing the two is comparing apples and oranges. And that they can, in fact, be compatible.

“The fact that the Kubernetes ecosystem is so engaged in making sure that WebAssembly is supported alongside containers is a good indication that on the orchestrator layer, people are paying attention,” he said. “We’re making wise choices, and we’re making sure that we’re not orphaning an entire technology merely because something new and shiny came along.”

Check out the full episode for more on new developments in WebAssembly and how Wasm is poised to play a central role in the cloud native ecosystem.

The post Where Does WebAssembly Fit in the Cloud Native World? appeared first on The New Stack.

]]>
What’s Holding up WebAssembly’s Adoption? https://thenewstack.io/whats-holding-up-webassemblys-adoption/ Wed, 12 Jul 2023 11:00:51 +0000 https://thenewstack.io/?p=22712447


The promise of WebAssembly is this: Putting applications in WebAssembly (Wasm) modules can improve their runtime performance and lower their latency, while improving compatibility across the board.

WebAssembly requires only a CPU instruction set. This means that a single deployment of an application in a WebAssembly module theoretically should be able to run, and be updated, on a multitude of disparate devices, whether servers, edge devices, multiclouds or serverless environments.

In this way, WebAssembly is already being widely deployed to improve application performance when running on the browser or on the backend. However, the full realization of WebAssembly’s potential has yet to be achieved.

While the WebAssembly core specification has become a standard, server-side Wasm remains a work in progress. The server-side Wasm layer helps ensure endpoint compatibility among the different devices and servers on which Wasm applications are deployed. Without a standardization mechanism for server-side WebAssembly, exports and imports must be built separately for each language, each runtime will understand exports/imports differently, and so on.

As of today, “Wasm components” refers to the component model, but other varieties are being worked on: WASI is an approach that configures Wasm for specific hardware; “wasi-libc” is the POSIX-like kernel group, or “world”; “wasi-cloud-core” is a proposal for a serverless “world.” As such, the day when developers can create applications in the language of their choice for distribution across any environment simultaneously, whether on Kubernetes clusters, servers or edge devices, has yet to come.

Indeed, “telling the WebAssembly story beyond the browser has taken a considerable amount of fundamental work,” Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack. “Some of this is just pure engineering: We’ve had to build the tooling. Some of it, though, has been good old-fashioned product management,” Butcher said. “That means identifying the things that frustrate the user, and then solving them. We are on the very cusp of seeing these two threads converge, as the practical output of product management intersects with the engineering work behind the component model.”

Wasm’s value proposition can be summed up by “supersonic” performance, reduced cost of operations and platform neutrality, but the component model remains the sticking point, Butcher said. “Performance was the easy one, and I think we can already check it off the list. At Fermyon, we’re seeing total cost of ownership plummet before our eyes as thousands of users sign up for our cloud,” Butcher said. “But platform neutrality — at the level we care about — requires the component model. On that front, tomorrow can’t come soon enough.”

WebAssembly is designed to run applications written in a number of languages it can host in a module. It now accommodates Python, JavaScript, C++, Rust and others. Different applications written with different programming languages should be able to function within a single module, although again, this capability largely remains under development.

“Making programming languages truly interchangeable at the system level might be the final frontier on the way toward achieving the code-once, deploy-anywhere paradigm. But for this to work out, we need a common standard to integrate different languages with their specific feature sets and design paradigms,” Torsten Volk, an analyst for Enterprise Management Associates (EMA), said.

“This is a classic collective action problem where individual for-profit organizations have to collaborate for all of them to collectively achieve the ultimate goal of language interoperability. Additionally, they need to agree on pragmatic compromises when it comes to standardizing and fleshing out feature sets across languages.”

Have a Huddle

Meanwhile, engineers from numerous companies and universities are working on the component model, WASI proposals and language toolchains under the auspices of the Bytecode Alliance, with the goal of putting the specifications into the World Wide Web Consortium (W3C), said Ralph Squillace, a principal program manager for Microsoft, Azure Core Upstream.

The engineers are actively contributing to the common pool of knowledge by contributing to or maintaining open source projects, taking part in efforts such as the Bytecode Alliance, or sharing their knowledge and experiences at conferences, such as KubeCon + CloudNativeCon Europe’s co-located event Cloud Native Wasm Day.

“As always when it comes to standards, all major parties involved need to be able to tell their stakeholders why it makes sense to spend valuable developer hours on this endeavor. This becomes especially tricky when different parties follow different incentive structures, e.g. cloud service providers are interested in customers spending as much money as possible on their services without getting sufficiently frustrated to move to another cloud,” Volk said. “This means that some level of lock-in is desired, while enterprise software vendors need to focus on a high degree of customizability and portability to open up their products to the largest possible audience. All this combined shows the high level of difficulty involved in bringing interoperability for Wasm over the finish line. I hope that we will because the payoff should definitely be worth it.”

A number of vendors offering PaaS products to distribute applications with Wasm continue to proliferate in anticipation of Wasm’s expected heyday. Entrants include Fermyon and Cosmonic. The newer player Dylibso is developing tailored solutions for observability; these include Modsurfer, used to analyze the complexity and potential risks associated with running specific code in your environment.

Meanwhile, most large software companies are actively contributing to Wasm without necessarily creating a formal department to support Wasm-related open source projects, development, integrations with infrastructure and network topologies, or application development for Wasm. Tech leaders are almost invariably working with Wasm, whether in production or in sandbox projects.

To facilitate the incorporation of WebAssembly (Wasm) and bridge any existing gaps, VMware‘s Wasm Labs launched the Wasm Language Runtimes project. The primary goal is to provide ready-to-run language runtimes, libraries and components for developers interested in embracing WebAssembly, according to Daniel Lopez Ridruejo, a senior director at VMware and CEO of Bitnami.

These language runtimes can be utilized in conjunction with various other initiatives, including mod_wasm (for running conventional web applications like WordPress) and Wasm Workers Server (for executing edge/serverless apps). Ridruejo also mentioned the compatibility of the Language Runtime project with open-source endeavors such as Fermyon’s Spin.

Others, such as Chronosphere and Microsoft, have already begun using WebAssembly to support their operations, while continuing to actively contribute to the development of Wasm for the community. In Microsoft’s case, its work with WebAssembly dates back years. Microsoft Flight Simulator, for example, has for some years used WebAssembly for mod protection, an approach shown to improve both security and portability for add-ons distributed as WebAssembly modules. Excel Online uses WebAssembly for calculating Lambda functions.

Most of Microsoft’s work now consists of investing in the upcoming component model, Microsoft’s Squillace said. For example, Microsoft is expanding the Azure Kubernetes Service WASI NodePool preview and giving its services additional hypervisor protection per request on top of the Wasm sandbox with the Hyperlight project, Squillace said. “This serves very small bare-metal micro-vms very fast for use with wasm functions,” Squillace said.

Outside of the Edge browser, Microsoft is investing mainly in server-based Wasm, the system interface (WASI) and the Wasm component ecosystem surrounding the Bytecode Alliance Foundation, as well as in infrastructure and language tooling to enable productive use, Squillace said. “That means open source investments like the CNCF’s Containerd runwasi shim for Kubernetes integration, but also TinyGo-compatible Wasm component tooling, VSCode extensions and serverless proposals like wasi-cloud-core,” Squillace said. “It also means Azure investments in security like hyperlight and Azure services like AKS WASI NodePool Preview and AKS Edge Essentials, among others.”

Big Hype

WebAssembly’s trajectory reflects cycles seen with earlier technologies, such as Java and containers, Ridruejo said. “Each one of them have seen an ecosystem grow around it with new ways of doing monitoring, security etc. It is too early to know yet what that looks like,” Ridruejo said. “The question is whether that change will be incremental and existing vendors like, say, Datadog for monitoring will add Wasm support as a new feature or it will be disruptive and new companies will take Datadog’s place (again just an example) and become the ‘Datadog of Wasm.’”

The million-dollar question is what needs to happen before tool providers and large enterprises can begin using WebAssembly to make money. To that, Squillace said:

“Customers already tell us they need a comprehensible (if not great) developer experience and a deployment and management experience that is solid. They also need networking support (coming in Preview 2); no networking means no service hosts in IoT without runtime support, for example. And finally, they need coherent interactive debugging. That last one is going to be hard across all languages and runtimes.”

The post What’s Holding up WebAssembly’s Adoption? appeared first on The New Stack.

]]>
Dylibso ModSurfer Brings SCADA Controls to WebAssembly https://thenewstack.io/dylibso-modsurfer-brings-scada-controls-to-webassembly/ Tue, 11 Jul 2023 12:00:16 +0000 https://thenewstack.io/?p=22709772

“Our goal is to provide Webassembly with a level of observability similar to what you are accustomed to with tools

The post Dylibso ModSurfer Brings SCADA Controls to WebAssembly appeared first on The New Stack.

]]>

“Our goal is to provide WebAssembly with a level of observability similar to what you are accustomed to with tools like Datadog or New Relic,” said Steve Manuel, CEO and co-founder of Dylibso, in an interview with TNS. “However, traditional SCADA (Supervisory Control and Data Acquisition) systems are not compatible with the unique characteristics of WebAssembly. The isolation mechanisms employed by WebAssembly effectively prevent external entities from inspecting and monitoring its internal operations.”

In March, the company released ModSurfer, a system-of-record and diagnostics application to search, browse, validate, audit and investigate WebAssembly binaries.

ModSurfer's desktop interface.

There are two main components to ModSurfer: the desktop application and a command-line interface. The desktop application is closed source and can be downloaded as a complete package. It comes with a comprehensive listing of your Wasm (WebAssembly) modules, allowing you to gain a deeper understanding of their functionality and potential risks.

With ModSurfer, you can analyze the complexity and potential risks associated with running specific code in your environment. You can assess factors such as CPU usage and determine whether the code exhibits any unexpected or malicious behaviors. This analysis helps you evaluate the safety and reliability of the modules you’re working with.

Furthermore, ModSurfer provides a search function that allows you to explore and filter your imported modules. For example, you can search for modules that import particular functions or interact with certain features. By clicking on individual modules, you can access additional details and examine their properties and dependencies.

In addition, ModSurfer enables you to set up error message notifications. If a particular term or condition, such as “CX,” is mentioned or triggered, you can configure the application to generate an error message. This feature helps you track and identify specific issues or concerns within your modules.

“Overall, ModSurfer offers a comprehensive environment for exploring, analyzing, and managing Wasm modules,” Manuel said. “It assists in evaluating their risks, searching for specific functionalities, and implementing error message notifications for targeted conditions.”

ModSurfer CLI control.

SCADA refers to a set of controls that observe and record the actions of complex systems.

Dylibso is “developing tailored solutions that work harmoniously with WebAssembly, enabling you to gain insight and observability into your production environment,” Manuel told The New Stack.

In March, the company secured $6.6 million in seed funding to make WebAssembly ready for enterprise production. The company has already enjoyed wide adoption of Extism, a “universal plug-in system” (still in beta) that allows WebAssembly modules to be embedded in other applications.

“These tools will help you understand the behavior of your WebAssembly code, ensure adherence to policies, and maintain the necessary level of compliance,” Manuel said. “Our focus is on bridging the gap between traditional observability practices and the distinctive isolation properties of WebAssembly.”

TNS Staff assisted in completing this post.

The post Dylibso ModSurfer Brings SCADA Controls to WebAssembly appeared first on The New Stack.

]]>
WebAssembly and Go: A Guide to Getting Started (Part 1) https://thenewstack.io/webassembly-and-go-a-guide-to-getting-started-part-1/ Mon, 12 Jun 2023 12:00:36 +0000 https://thenewstack.io/?p=22709669

WebAssembly (Wasm) and Go are a powerful combination for building efficient and high-performance web applications. WebAssembly is a portable and

The post WebAssembly and Go: A Guide to Getting Started (Part 1) appeared first on The New Stack.

]]>

WebAssembly (Wasm) and Go are a powerful combination for building efficient and high-performance web applications. WebAssembly is a portable and efficient binary instruction format designed for web browsers, while Go is a programming language known for its simplicity, speed and concurrency features.

In this article, we will explore how WebAssembly and Go can work together to create web applications that leverage the benefits of both technologies. We will demonstrate the steps involved in compiling Go code into Wasm format, loading the resulting WebAssembly module into the browser, and enabling bidirectional communication between Go and JavaScript.

Using Go for WebAssembly offers several advantages. First, Go provides a familiar and straightforward programming environment for web developers, making it easy to transition from traditional Go development to web development.

Secondly, Go’s performance and concurrency features are well-suited for building efficient web applications that can handle heavy workloads.

Finally, the combination of Go and WebAssembly allows for cross-platform compatibility, enabling the deployment of applications on various browsers without the need for plugins or additional dependencies.

We will dive into the technical details of compiling Go code to Wasm, loading the module in a web browser, and establishing seamless communication between Go and JavaScript for WebAssembly.

You’ll come away with a comprehensive understanding of how Wasm and Go can be leveraged together to create efficient, cross-platform web applications. Whether you are a Go developer looking to explore web development or a web developer seeking high-performance options, this article will equip you with the knowledge and tools to get started with WebAssembly and Go.

Go and Its Use Cases

Go is often used for server-side development, network programming and distributed systems, but it can also be used for client-side web development.

Web development. Go is a popular choice for web development due to its simplicity, speed and efficient memory usage. It is well-suited for building backend web servers, APIs and microservices. Go’s standard library includes many built-in packages that make web development easy and efficient. Some popular web frameworks built in Go include Gin, Echo and Revel.

System programming. Go was designed with system programming in mind. It has a low-level feel and provides access to system-level features such as memory management, network programming and low-level file operations. This makes it ideal for building system-level applications such as operating systems, device drivers and network tools.

DevOps tools. Go’s simplicity and efficiency make it well-suited for building DevOps tools such as build systems, deployment tools, and monitoring software. Many popular DevOps tools are built in Go, such as Docker, Kubernetes, and Terraform.

Machine learning. Although not as popular as other programming languages for machine learning, Go’s performance and concurrency features make it a good choice for building machine learning models. It has a growing ecosystem of machine learning libraries and frameworks such as Gorgonia and Tensorflow.

Command-line tools. Go’s simplicity and fast compilation time makes it an ideal choice for building command-line tools. Go’s standard library includes many built-in packages for working with the command-line interface, such as the “flag” package for parsing command-line arguments and the “os/exec” package for executing external commands.
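To make the last point concrete, here is a minimal sketch of command-line parsing with the standard “flag” package; the parseArgs helper and the flag names are invented for this example and are not part of any library:

```go
package main

import (
	"flag"
	"fmt"
)

// parseArgs parses a -name flag and a -times count from an argument
// slice. Using a flag.FlagSet instead of the package-level flag
// functions keeps the parsing reusable and easy to test.
func parseArgs(args []string) (string, int, error) {
	fs := flag.NewFlagSet("greet", flag.ContinueOnError)
	name := fs.String("name", "world", "who to greet")
	times := fs.Int("times", 1, "how many times to greet")
	if err := fs.Parse(args); err != nil {
		return "", 0, err
	}
	return *name, *times, nil
}

func main() {
	// In a real CLI you would pass os.Args[1:] here.
	name, times, err := parseArgs([]string{"-name", "Gopher", "-times", "2"})
	if err != nil {
		panic(err)
	}
	for i := 0; i < times; i++ {
		fmt.Println("Hello,", name)
	}
}
```

Choosing flag.ContinueOnError (rather than the default ExitOnError) lets the caller decide how to handle bad input, which is friendlier for testing.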

Key Benefits of Using WebAssembly with Go

Performance. WebAssembly is designed to be fast and efficient, which makes it an ideal choice for running computationally intensive tasks in the browser. Go is also known for its speed and efficiency, making it a good fit for building high-performance web applications.

Portability. Wasm is designed to be portable across different platforms and architectures. This means that you can compile Go code into WebAssembly format and run it on any platform that supports WebAssembly. This makes it easier to build web applications that work seamlessly across different devices and operating systems.

Security. WebAssembly provides a sandboxed environment for running code in the browser, which helps to prevent malicious code from accessing sensitive user data. Go also has built-in security features such as memory safety and type safety, which can help to prevent common security vulnerabilities.

Concurrency. Go is designed with concurrency in mind, which makes it easier to build web applications that can handle multiple requests simultaneously. By combining WebAssembly and Go, you can build web applications that are highly concurrent and can handle a large number of requests at the same time.

How WebAssembly Works with the Browser

When a Wasm module is loaded in a browser, it is executed by a virtual machine, the WebAssembly runtime, which is built into the browser’s JavaScript engine and translates the Wasm code into native machine code.

The WebAssembly runtime is exposed to the page through a JavaScript API that provides functions for loading, validating and executing Wasm modules. When a Wasm module is loaded, the runtime validates the module’s bytecode and creates an instance of the module, which can be used to call its functions and access its data.

Wasm modules can interact with the browser’s Document Object Model (DOM) and other web APIs using JavaScript. For example, a Wasm module can modify the contents of a webpage, listen for user events, and make network requests using the browser’s web APIs.

One of the key benefits of using Wasm with the browser is that it provides a way to run code that is more performant than JavaScript. JavaScript is an interpreted language, which means that it can be slower than compiled languages like C++ or Go. However, by compiling code into Wasm format, it can be executed at near-native speeds, making it ideal for computationally intensive tasks such as machine learning or 3D graphics rendering.

Using WebAssembly with Go

The Go programming language has a compiler that can produce Wasm binaries, allowing Go programs to run in a web browser. The WebAssembly target is selected by setting the GOOS=js and GOARCH=wasm environment variables when invoking the standard Go compiler.

When compiling a Go program for WebAssembly, the Go compiler generates WebAssembly bytecode that can be executed in the browser using the WebAssembly Runtime. The generated Wasm module includes all of the Go runtime components needed to run the program, so no additional runtime support is required in the browser.

The Go compiler for WebAssembly supports the same set of language features as the regular Go compiler, including concurrency, garbage collection, and type safety. However, some Go features are not yet fully supported in WebAssembly, such as reflection and cgo.

Reflection. Reflection is a powerful feature in Go that allows programs to examine and manipulate their own types and values at runtime. However, due to the limitations of the Wasm runtime environment, reflection is not fully supported in Go programs compiled to WebAssembly. Some reflection capabilities may be limited or unavailable in WebAssembly binaries.

Cgo. The cgo tool in Go enables seamless integration with C code, allowing Go programs to call C functions and use C libraries. However, the cgo functionality is not currently supported in Go programs compiled to WebAssembly. This means that you cannot directly use cgo to interface with C code from WebAssembly binaries.

Technical Overview: How Wasm and Go Work Together

To compile Go code into WebAssembly format, you can use the Golang Wasm compiler. This tool generates a .wasm file that can be executed in a web browser. The compiler translates Go code into WebAssembly instructions that can be executed by a virtual machine in the browser.

Once you have the .wasm file, you need to load it into the browser using the WebAssembly JavaScript API. This API provides functions to load the module, instantiate it, and execute its functions.

You can load the .wasm file using the fetch() function, which retrieves the file as an ArrayBuffer. You can then instantiate the module using the WebAssembly.instantiate() function, which returns a Promise that resolves to an object containing both a WebAssembly.Module and a WebAssembly.Instance.

Calling Go Functions from JavaScript

After the WebAssembly module is loaded and instantiated, it exposes its functions to JavaScript. These functions can be called from JavaScript using the WebAssembly JavaScript API.

You can use the WebAssembly.instantiate() function to obtain a JavaScript object that contains the exported functions from the WebAssembly module. You can then call these functions from JavaScript just like any other JavaScript function.

Calling JavaScript Functions from Go

To call JavaScript functions from Go, you can use the syscall/js package. This package provides a way to interact with the JavaScript environment. You can create JavaScript values, call JavaScript functions, and handle JavaScript events from Go.

Use the js.Global() function to get the global object in the JavaScript environment. You can then call any function on this object using the Call() function, passing in the function name and any arguments.

The Golang WebAssembly API

The Golang WebAssembly API provides a set of functions that can be used to interact with WebAssembly modules from Go code running in a web browser. These functions allow Go programs to call functions defined in WebAssembly modules, pass values between Go and WebAssembly, and manipulate WebAssembly memory.

The Golang WebAssembly API is implemented through the “syscall/js” package, which provides a bridge between Go and the JavaScript host environment.

Using the Golang WebAssembly API, Go programs can load and instantiate Wasm modules, call functions defined in the modules, and manipulate the memory of the modules. For example, a Go program can load a Wasm module that performs complex computations, and then use the Golang WebAssembly API to call functions in the module and retrieve the results.

The Golang WebAssembly API also provides a way to define and export Go functions that can be called from WebAssembly modules. This allows Go programs to expose functionality to WebAssembly modules and provides a way to integrate Go code with existing JavaScript codebases.

Here’s a demonstration of how to compile a simple Go program to WebAssembly and load it in the browser:

First, make sure you have a recent Go toolchain installed. Support for the WebAssembly target has been built into the standard compiler since Go 1.11, so no separate installation is required. You can confirm your setup by running the following command:

go version


Any Go release from 1.11 onward can compile to WebAssembly.

Next, we can write a simple Go program that adds two numbers together and exports that function to JavaScript:

package main

import "syscall/js"

func add(this js.Value, args []js.Value) interface{} {
  return args[0].Int() + args[1].Int()
}

func main() {
  js.Global().Set("add", js.FuncOf(add))
  select {} // block forever so the exported function stays available to JavaScript
}


We can then compile this program to WebAssembly by running the following command:

GOARCH=wasm GOOS=js go build -o add.wasm


This will generate a WebAssembly binary file called “add.wasm.”

Now we can write some JavaScript code to load and execute the WebAssembly module. Here’s an example:

const go = new Go();

WebAssembly.instantiateStreaming(fetch('add.wasm'), 
go.importObject).then((result) => {
  go.run(result.instance);
  console.log("Result:", add(2, 3)); // call the 'add' function defined in the Go program
});


This code creates a new instance of the Go runtime support object (defined in wasm_exec.js), loads the add.wasm module using the WebAssembly API, runs the module, and then calls the add function defined in the Go program.

Finally, we can load our JavaScript code in a webpage and view the output in the browser console. For example:

<!DOCTYPE html>
<html>
 <head>
   <meta charset="utf-8">
   <title>Go + WebAssembly Example</title>
 </head>
 <body>
   <script src="wasm_exec.js"></script>
   <script>
     // insert JavaScript code here
   </script>
 </body>
</html>


This HTML file loads the wasm_exec.js file, which is included with the Go compiler for WebAssembly, and then includes our JavaScript code to load and execute the add.wasm module.

That’s it! With these steps, we can compile a simple Go program to WebAssembly and load it in a web browser using JavaScript. This provides a powerful way to build high-performance web applications with the simplicity and ease of use of the Go programming language.

How to Use Go with Various Wasm Frameworks

Here’s an overview of different WebAssembly toolchains that can be used alongside Go, including AssemblyScript (a TypeScript-like language that compiles to Wasm) and TinyGo (a Go compiler that targets WebAssembly and embedded systems).

AssemblyScript

AssemblyScript provides a familiar syntax for web developers and can be used alongside Go to provide additional functionality to a web application. Here’s an example of how to use Go with AssemblyScript:

import * as go from "go";

const wasmModule = new WebAssembly.Module(
  await fetch('add.wasm').then(response => response.arrayBuffer()));
const wasmInstance = new WebAssembly.Instance(wasmModule, go.importObject);

console.log(wasmInstance.exports.add(2, 3)); // Call the 'add' function defined in the Wasm module

await go.run(wasmInstance); // Start the Go runtime and call Go functions from JavaScript


In this example, we load the add.wasm module using the WebAssembly API and instantiate it with the Go import object. We then call the add function defined in the WebAssembly module and pass it two parameters. Finally, we start the Go runtime and call Go functions from JavaScript.

TinyGo

TinyGo provides a subset of the Go standard library and can be used to write low-level code that runs in the browser. Here’s an example of how to use TinyGo to call a function defined in a Go WebAssembly module:

package main

import "syscall/js"

func add(this js.Value, inputs []js.Value) interface{} {
  a := inputs[0].Int()
  b := inputs[1].Int()
  return a + b
}

func main() {
  c := make(chan struct{}, 0)
  js.Global().Set("add", js.FuncOf(add))
  <-c
}


In this example, we define a function called add that takes two integer parameters and returns their sum. We then use the “syscall/js” package to export this function to JavaScript. Finally, we block the main thread using a channel to prevent the Go program from exiting.

We can then call this function from JavaScript using the following code:

const go = new Go();

WebAssembly.instantiateStreaming(fetch('add.wasm'), 
go.importObject).then((result) => {
   go.run(result.instance);
   console.log("Result:", add(2, 3)); // call the 'add' function defined in the Go program
});


In this example, we instantiate the WebAssembly module and pass it to the Go runtime using the Go import object. We then run the Go program and call the add function defined in the Go program. The result is then printed to the console.

Using Wasm for Cross-Platform Development

WebAssembly code can be run in any environment that supports it, including browsers and standalone runtimes. Developers can use it to create applications that can run on multiple platforms with minimal code changes — fulfilling WebAssembly’s promise of “build once, run anywhere.” This can help to reduce development time and costs, while also providing a consistent user experience across different devices and platforms.

One way to use Wasm for cross-platform development is to build an application in a language that can be compiled to WebAssembly, such as Go or Rust. Once the application is built, it can be compiled to WebAssembly and deployed to the web, or compiled to native code and deployed to a desktop environment, using a framework like Electron or GTK.

Another way to use Wasm for cross-platform development is to take an existing C or C++ codebase and compile it to WebAssembly using a tool like Emscripten. This approach can be especially useful for porting existing native applications to run on the web, or for building applications that need to run on both the web and desktop.

Go programs can be compiled to both WebAssembly and native desktop environments using a number of different tools and frameworks.

For example, Electron is a popular framework for building cross-platform desktop applications using web technologies like HTML, CSS, and JavaScript. Go programs can be compiled to run on Electron using a tool like Go-Electron, which provides a way to package Go applications as Electron apps.

Another option is to use GTK, a popular cross-platform toolkit for building desktop applications. Go programs can be compiled to run on GTK using the gotk3 package, which provides Go bindings for GTK.

The post WebAssembly and Go: A Guide to Getting Started (Part 1) appeared first on The New Stack.

]]>
WebAssembly and Go: A Guide to Getting Started (Part 2) https://thenewstack.io/webassembly-and-go-a-guide-to-getting-started-part-2/ Mon, 12 Jun 2023 12:00:13 +0000 https://thenewstack.io/?p=22709677

WebAssembly (Wasm) and Golang (Go) are a dynamic duo for high-performance web applications due to their specific features and advantages.

The post WebAssembly and Go: A Guide to Getting Started (Part 2) appeared first on The New Stack.

]]>

WebAssembly (Wasm) and Golang (Go) are a dynamic duo for high-performance web applications due to their specific features and advantages. Wasm is a binary instruction format that allows running code at near-native speed in modern web browsers. It provides a low-level virtual machine that enables efficient execution of code, making it ideal for performance-intensive tasks.

Go is a statically typed, compiled programming language known for its simplicity, efficiency and high-performance characteristics. It offers built-in concurrency support, efficient memory management, and excellent execution speed. These qualities make Go a suitable language for developing backend systems that power web applications.

By combining WebAssembly and Go, developers can achieve exceptional performance in web applications. Go can be used to write backend services, APIs and business logic, while WebAssembly can be used to execute performance-critical code in the browser. This combination allows for offloading computation to the client-side, reducing server load and improving responsiveness.

Furthermore, Go has excellent interoperability with WebAssembly, allowing seamless integration between the two. Developers can compile Go code to WebAssembly modules, which can be executed in the browser alongside JavaScript, enabling the utilization of Go’s performance benefits on the client side.

Performance is of paramount importance in web applications for several reasons:

User experience. A fast and responsive web application enhances the user experience and satisfaction. Users expect web pages to load quickly and respond promptly to their interactions. Slow and sluggish applications can lead to frustration, abandonment and loss of users.

Conversion rates. Performance directly impacts conversion rates, especially in e-commerce and online businesses. Studies have shown that even minor delays in page load times can result in higher bounce rates and lower conversion rates. Improved performance can lead to increased engagement, longer session durations and higher conversion rates.

Search Engine Optimization (SEO). Search engines, like Google, take website performance into account when ranking search results. Faster-loading websites tend to have better search engine rankings, which can significantly impact organic traffic and visibility.

Mobile users. With the increasing use of mobile devices, performance becomes even more critical. Mobile networks can be slower and less reliable than fixed-line connections. Optimizing web application performance ensures a smooth experience for mobile users, leading to better engagement and retention.

Competitiveness. In today’s highly competitive digital landscape, performance can be a key differentiator. Users have numerous options available, and if your application is slow, they may switch to a competitor offering a faster and more efficient experience.

How Wasm Enhances Web Application Performance

Near-native performance. WebAssembly is designed to execute code at near-native speed. It achieves this by using a compact binary format that can be efficiently decoded and executed by modern web browsers. Unlike traditional web technologies like JavaScript, which are interpreted at runtime, Wasm code is compiled ahead of time and can be executed directly by the browser’s virtual machine, resulting in faster execution times.

Efficient execution. WebAssembly provides a low-level virtual machine that allows for efficient execution of code. It uses a stack-based architecture that minimizes the overhead associated with memory access and function calls. Additionally, WebAssembly operates on a compact binary format, reducing the size of the transmitted code and improving load times.

Multilanguage support. WebAssembly is designed to be language-agnostic, which means it can be used with a wide range of programming languages. This allows developers to leverage the performance benefits of Wasm while using their preferred programming language. By compiling code from languages like C, C++, Rust, and Go to WebAssembly, developers can take advantage of their performance characteristics and seamlessly integrate them into web applications.

Offloading computation. Wasm enables offloading computationally intensive tasks from the server to the client side. By moving certain operations to the browser, web applications can reduce the load on the server, distribute computation across multiple devices and improve overall responsiveness. This can be particularly beneficial for applications that involve complex calculations, image processing, simulations and other performance-critical tasks.

Seamless integration with JavaScript. WebAssembly can easily integrate with JavaScript, the traditional language of the web. This allows developers to combine the performance benefits of Wasm with the rich ecosystem of JavaScript libraries and frameworks. WebAssembly modules can be imported and exported from JavaScript code, enabling interoperability and smooth interaction between the two.

Progressive enhancement. Wasm supports a progressive enhancement approach to web development. Developers can choose to compile performance-critical parts of their application to WebAssembly while keeping the rest of the code in JavaScript. This way, the performance gains are selectively applied where they are most needed, without requiring a complete rewrite of the entire application.

WebAssembly vs. Other Web Technologies

WebAssembly outperforms JavaScript and asm.js in terms of execution speed. JavaScript is an interpreted language, while asm.js is a subset of JavaScript optimized for performance.

In contrast, WebAssembly executes at near-native speed, thanks to its efficient binary format and ahead-of-time (AOT) compilation. Wasm is language-agnostic, allowing developers to use multiple languages.

JavaScript has a larger developer community and mature tooling, while asm.js requires specific optimizations. WebAssembly binaries are smaller, resulting in faster load times. JavaScript has wider browser compatibility and seamless interoperability with web technologies.

WebAssembly requires explicit interfaces for interaction with JavaScript. Overall, Wasm offers high performance, while JavaScript has wider adoption and tooling support. Usage of asm.js has diminished with the rise of WebAssembly. The choice depends on performance needs, language preferences and browser support.

How Go Helps Create High-Performance Apps

Go is known for its key features that contribute to building high-performance applications. These features include:

Compiled language. Go compiles source code into efficient machine code, which results in fast execution and eliminates the need for interpretation at runtime. The compiled binaries can be directly executed by the operating system, providing excellent performance.

Concurrency support. The language has built-in support for concurrency through goroutines and channels. Goroutines are lightweight threads that allow concurrent execution of functions, while channels facilitate communication and synchronization between goroutines.

This concurrency model makes it easy to write highly concurrent and parallel programs, enabling efficient use of available resources and improving performance in scenarios like handling multiple requests or processing large amounts of data concurrently.

Efficient garbage collection. Go incorporates a garbage collector that automatically manages memory allocation and deallocation. It uses a concurrent garbage collector that minimizes pauses and allows applications to run smoothly without significant interruptions. The garbage collector efficiently reclaims unused memory, preventing memory leaks and enabling efficient memory management in high-performance applications.

Strong standard library. Go comes with a rich standard library that provides a wide range of functionalities, including networking, file I/O, encryption, concurrency primitives and more. The standard library is designed with performance and efficiency in mind, offering optimized implementations and well-designed APIs.

Developers can leverage these libraries to build high-performance applications without relying heavily on third-party dependencies.

Native support for concurrency patterns. Go provides native support for common concurrency patterns, such as mutexes, condition variables and atomic operations. These features enable developers to write thread-safe and efficient concurrent code without the complexities typically associated with low-level synchronization primitives.

This native support simplifies the development of concurrent applications and contributes to improved performance.

Efficient networking. Golang’s standard library includes a powerful networking package that offers efficient abstractions for building networked applications. It provides a robust set of tools for handling TCP/IP, UDP, HTTP, and other protocols. The networking capabilities of Go are designed to be performant, enabling the development of high-throughput and low-latency network applications.

Compilation to standalone binaries. Go can compile code into standalone binaries that contain all the necessary dependencies and libraries. These binaries can be easily deployed and executed on various platforms without requiring the installation of additional dependencies.

This approach simplifies deployment and can contribute to better performance by reducing overhead and ensuring consistent execution environments.

Using Wasm for Computationally Intensive Tasks

Wasm can greatly improve the performance of computationally intensive tasks like image processing or cryptography by leveraging its near-native execution speed. By compiling algorithms or libraries written in languages like C/C++ or Rust to WebAssembly, developers can achieve significant performance gains.

WebAssembly’s efficient binary format and ability to execute in a sandboxed environment make it ideal for running computationally intensive operations in the browser.

Go programs can benefit from improved performance when compiled to Wasm for computationally intensive tasks. For example, Go libraries or applications that involve heavy image manipulation, complex mathematical calculations or cryptographic operations can be compiled to WebAssembly to take advantage of its speed.

Using WebAssembly for UI Rendering

WebAssembly can improve UI rendering performance in the browser compared to traditional JavaScript approaches. By leveraging Wasm's efficient execution and low-level control over memory, rendering engines can achieve faster updates and smoother animations.

WebAssembly allows UI rendering code to run closer to native speeds, resulting in improved user experiences, especially for complex or graphically intensive applications.

Wasm-based UI frameworks that fill the same role as React or Vue.js can benefit from this approach. By leveraging the speed and efficiency of Wasm, such frameworks can deliver faster rendering and more responsive user interfaces. Compiling UI components written in languages like Rust or C++ to WebAssembly can enhance the overall performance and responsiveness of the UI, making the user experience more seamless and interactive.

Using WebAssembly for Game Development

WebAssembly's efficient, near-native execution makes it well suited to browser-based game development, offering improved performance compared to traditional JavaScript game engines. By compiling game logic and rendering code to WebAssembly, developers can achieve near-native speeds, enabling complex and visually rich games to run smoothly in the browser.

Go-based game engines like Azul3D can benefit from improved performance when compiled to WebAssembly. By leveraging the speed and efficiency of Wasm, Go game engines can deliver high-performance browser games with advanced graphics and physics simulations.

Compiling Go-based game engines to WebAssembly enables developers to harness Go’s performance characteristics and create immersive gaming experiences that rival native applications.

The Power of Go and WebAssembly: Case Studies

TinyGo

TinyGo is a project that compiles Go code to WebAssembly for running on resource-constrained devices and in the browser. It showcases the performance gains of combining Go with Wasm for scenarios where efficiency and low resource usage are crucial.

Wasmer

Wasmer is an open-source runtime for executing WebAssembly outside the browser. It supports running Go code as WebAssembly modules. Wasmer’s performance benchmarks have demonstrated that Go code executed as Wasm can achieve comparable or better performance than JavaScript in various scenarios.

Vecty

Vecty is a web framework for building responsive and dynamic frontends in Go using WebAssembly. It aims to compete with modern web frameworks like React and Vue.js. Here are some key features of Vecty:

  • Simplicity. Vecty is designed to be easily mastered by newcomers, especially those familiar with the Go programming language. It follows Go’s philosophy of simplicity and readability.
  • Performance. Vecty focuses on providing efficient and understandable performance. It aims to generate small bundle sizes, resulting in faster loading times for your web applications. Vecty strives to achieve the same performance as raw JavaScript, HTML and CSS.
  • Composability. Vecty allows you to nest components, enabling you to build complex user interfaces by logically separating them into smaller, reusable packages. This composability promotes code reusability and maintainability.
  • Designed for Go. Vecty is specifically designed for Go developers. Instead of translating popular libraries from other languages to Go, Vecty was built from the ground up, asking the question, “What is the best way to solve this problem in Go?” This approach ensures that Vecty leverages Go’s unique strengths and idioms.

Best Practices: Developing Web Apps with Wasm and Go

Optimize Go Code for WebAssembly

Minimize memory allocations. Excessive memory allocations can impact performance. Consider using object pooling or reusing memory to reduce the frequency of allocations and deallocations.

Use efficient data structures. Choose data structures that are optimized for performance. Go provides various built-in data structures like slices and maps that are efficient for most use cases.

Limit garbage collection pressure. Excessive garbage collection can introduce pauses and affect performance. Minimize unnecessary object allocations and use the appropriate garbage collection settings to optimize memory management.

Optimize loops and iterations. Identify loops and iterations that can be optimized. Use loop unrolling, minimize unnecessary calculations and ensure efficient memory access patterns.

Leverage goroutines and channels. Go's concurrency primitives, goroutines and channels, can help maximize performance. Use them to parallelize tasks and efficiently handle concurrent operations.

Maximize Performance in Wasm Modules

Minimize startup overhead. Reduce the size of the WebAssembly module by eliminating unnecessary code and dependencies. Minify and compress the module to minimize download time.

Optimize data transfers. Minimize data transfers between JavaScript and Wasm modules. Use efficient memory layouts and data representations to reduce serialization and deserialization overhead.

Use SIMD instructions. If applicable, use single instruction, multiple data (SIMD) instructions to perform parallel computations and improve performance. SIMD can be especially beneficial for tasks involving vector operations.

Profile and optimize performance-critical code. Identify performance bottlenecks by profiling the WebAssembly module. Optimize the hot paths, critical functions and sections that consume significant resources to improve overall performance.

Use compiler and optimization flags. Use compiler-specific flags and optimizations tailored for WebAssembly. Different compilers may have specific optimizations to improve performance for Wasm targets.

Minimize Latency and Improve Responsiveness

Reduce round trips. Minimize the number of network requests by combining resources, utilizing caching mechanisms, and employing efficient data transfer protocols like HTTP/2 or WebSockets.

Do asynchronous operations. Use asynchronous programming techniques to avoid blocking the main thread and enhance responsiveness. Employ callbacks, Promises, or async/await syntax for non-blocking I/O operations.

Employ lazy loading and code splitting. Divide the application into smaller modules and load them on-demand as needed. Lazy loading and code splitting reduce the initial load time and improve perceived performance.

Use efficient DOM manipulation. Optimize Document Object Model (DOM) manipulation operations by batching changes and reducing layout recalculations. Use techniques like virtual DOM diffing to minimize updates and optimize rendering.

Rely on caching and prefetching. Leverage browser caching mechanisms and prefetching to proactively load resources that are likely to be needed, reducing latency and improving perceived performance.

The post WebAssembly and Go: A Guide to Getting Started (Part 2) appeared first on The New Stack.

How WASM (and Rust) Unlocks the Mysteries of Quantum Computing https://thenewstack.io/how-wasm-and-rust-unlocks-the-mysteries-of-quantum-computing/ Thu, 08 Jun 2023 10:00:40 +0000 https://thenewstack.io/?p=22709920


WebAssembly has come a long way from the browser; it can be used for building high-performance web applications, for serverless applications, and for many other uses.

Recently, we also spotted it as a key technology used in creating and controlling a previously theoretical state of matter that could unlock reliable quantum computing — for the same reasons that make it an appealing choice for cloud computing.

Quantum Needs Traditional Computing

Quantum computing uses exotic hardware (large, expensive and very, very cold) to model complex systems and problems that need more memory than the largest supercomputer: it stores information in equally exotic quantum states of matter and runs computations on it by controlling the interactions of subatomic particles.

But alongside that futuristic quantum computer, you need traditional computing resources to feed data into the quantum system, to get the results back from it — and to manage the state of the qubits to deal with errors in those fragile quantum states.

As Dr. Krysta Svore, the researcher heading the team building the software stack for Microsoft’s quantum computing project, put it in a recent discussion of hybrid quantum computing, “We need 10 to 100 terabytes a second bandwidth to keep the quantum machine alive in conjunction with a classical petascale supercomputer operating alongside the quantum computer: it needs to have this very regular 10 microsecond back and forth feedback loop to keep the quantum computer yielding a reliable solution.”

Qubits can be affected by what’s around them and lose their state in microseconds, so the control system has to be fast enough to measure the quantum circuit while it’s operating (that’s called a mid-circuit measurement), find any errors and decide how to fix them — and send that information back to control the quantum system.

“Those qubits may need to remain alive and remain coherent while you go do classical compute,” Svore explained. “The longer that delay, the more they’re decohering, the more noise that is getting applied to them and thus the more work you might have to do to keep them stable and alive.”

Fixing Quantum Errors with WASM

There are different kinds of exotic hardware in quantum computers and you have a little more time to work with a trapped-ion quantum computer like the Quantinuum System Model H2, which will be available through the Azure Quantum service in June.

That extra time means the algorithms that handle the quantum error correction can be more sophisticated, and WebAssembly is the ideal choice for building them, Pete Campora, a quantum compiler engineer at Quantinuum, told The New Stack.

Over the last few years, Quantinuum has used WebAssembly (WASM) as part of the control system for increasingly powerful quantum computers, going from just demonstrating that real-time quantum error correction is possible to experimenting with different error correction approaches and, most recently, creating and manipulating for the first time the exotic entangled quantum states (called non-Abelian anyons) that could be the basis of fault-tolerant quantum computing.

Move one of these quasiparticles around another — like braiding strings — and they store that sequence of movements in their internal state, forming what’s called a topological qubit that’s much more error resistant than other types of qubit.

At least, that’s the theory: and WebAssembly is proving to be a key part of proving it will work — which still needs error correction on today’s quantum computers.

“We’re using WebAssembly in the middle of quantum circuit execution,” Campora explained. The control system software is “preparing quantum states, doing some mid-circuit measurements, taking those mid-circuit measurements, maybe doing a little bit of classical calculation in the control system software and then passing those values to the WebAssembly environment.”

Controlling Quantum Circuits

In the cloud, developers are used to picking the virtual machine with the right specs or choosing the right accelerator for a workload.

Rather than picking from fixed specs, quantum programming can require you to define the setup of your quantum hardware, describing the quantum circuit that will be formed by the qubits as well as the algorithm that will run on it — and error-correcting the qubits while the job is running — with a language like OpenQASM (Open Quantum Assembly Language); that’s rather like controlling an FPGA with a hardware description language like Verilog.

You can’t measure a qubit to check for errors directly while it’s working or you’d end the computation too soon, but you can measure an extra qubit (called an “ancilla” because it’s used to store partial results) and extrapolate the state of the working qubit from that.

What you get is a pattern of measurements called a syndrome. In medicine, a syndrome is a pattern of symptoms used to diagnose a complicated medical condition like fibromyalgia. In quantum computing, you have to “diagnose” or decode qubit errors from the pattern of measurements, using an algorithm that can also decide what needs to be done to reverse the errors and stop the quantum information in the qubits from decohering before the quantum computer finishes running the program.

OpenQASM is good for basic integer calculation, but it requires a lot of expertise to write that code: “There’s a lot more boilerplate than if you just call out to a nice function in WASM.”

Writing the algorithmic decoder that uses those qubit measurements to work out what the most likely error is and how to correct it in C, C++ or Rust and compiling it to WebAssembly makes it more accessible and lets the quantum engineers use more complex data structures like vectors, arrays, tuples and other ways to pass data between different functions to write more sophisticated algorithms that deliver more effective quantum error correction.

“An algorithmic decoder is going to require data structures beyond what you would reasonably try to represent with just integers in the control system: it just doesn’t make sense,” Campora said. “The WASM environment does a lot of the heavy lifting of mutating data structures and doing these more complex algorithms. It even does things like dynamic allocation that normally you’d want to avoid in control system software due to timing requirements and being real time. So, the Rust programmer can take advantage of Rust crates for representing graphs and doing graph algorithms and dynamically adding these nodes into a graph.”

The first algorithmic decoder the Quantinuum team created in Rust and compiled to WASM was fairly simple: “You had global arrays or dictionaries that mapped your sequence of syndromes to a result.” The data structures used in the most recent paper are more complex and quantum engineers are using much more sophisticated algorithms like graph traversal and Dijkstra’s [shortest path] algorithm. “It’s really interesting to see our quantum error correction researchers push the kinds of things that they can write using this environment.”

Enabling software that’s powerful enough to handle different approaches to quantum error correction makes it much faster and more accessible for researchers to experiment than if they had to make custom hardware each time, or even reprogram an FPGA, especially for those with a background in theoretical physics (with the support of the quantum compiler team if necessary). “It’s portable, and you can generate it from different languages, so that frees people up to pick whatever language and software that can compile to WASM that’s good for their application.”

“It’s definitely a much easier time for them to get spun up trying to think about compiling Rust to WebAssembly versus them having to try and program an FPGA or work with someone else and describe their algorithms. This really allows them to just go and think about how they’re going to do it themselves,” Campora said.

Sandboxes and System Interfaces

With researchers writing their own code to control a complex — and expensive — quantum system, protecting that system from potentially problematic code is important and that’s a key strength of WebAssembly, Campora noted. “We don’t have to worry about the security concerns of people submitting relatively arbitrary code, because the sandbox enforces memory safety guarantees and basically isolates you from certain OS processes as well.”

Developing quantum computing takes the expertise of multiple disciplines and both commercial and academic researchers, so there are the usual security questions around code from different sources. “One of the goals with this environment is that, because it’s software, external researchers that we’re collaborating with can write their algorithms for doing things like decoders for quantum error correction and can easily tweak them in their programming language and resubmit and keep re-evaluating the data.”

A language like Portable C could do the computation, “but then you lose all of those safety guarantees,” Campora pointed out. “A lot of the compilation tooling is really good about letting you know that you’re doing something that would require you to break out of the sandbox.”

WebAssembly restricts what a potentially malicious or inexpert user could do that might damage the system but also allows system owners to offer more capabilities to users who need them, using WASI — the WebAssembly System Interface that standardizes access to features and services that aren’t in the WASM sandbox.

“I like the way WASI can allow you, in a more fine-grained way, to opt into a few more things that would normally be considered breaking the sandbox. It gives you control. If somebody comes up to you with a reasonable request that that would be useful for, say, random number generation we can look into adding WASI support so that we can unblock them, but by default, they’re sandboxed away from OS things.”

In the end, esoteric as the work is, the appeal of WebAssembly for quantum computing error correction is very much what makes it so useful in so many areas.

“The web part of the name is almost unfortunate in certain ways,” Campora noted, “because it’s really this generic virtual machine-stack machine-sandbox, so it can be used for a variety of domains. If you have those sandboxing needs, it’s really a great target for you to get some safety guarantees and still allows people to submit code to it.”

The post How WASM (and Rust) Unlocks the Mysteries of Quantum Computing appeared first on The New Stack.

The Need to Roll up Your Sleeves for WebAssembly https://thenewstack.io/the-need-to-roll-up-your-sleeves-for-webassembly/ Mon, 05 Jun 2023 13:00:41 +0000 https://thenewstack.io/?p=22706865


We already know how putting applications in WebAssembly modules can improve runtime performance, latency and compatibility when deployed. We also know that WebAssembly has been used to improve application performance both in the browser and on the backend. But the day when developers can create applications in the language of their choice for distribution across any environment simultaneously, whether on Kubernetes clusters, servers or edge devices, remains a work in progress.

This status quo became that much more apparent from the talks and impromptu meetings I had during KubeCon + CloudNativeCon in April. Beyond the growing number of WebAssembly module and service providers and startups offering Wasm support, it's hard to find any organization that is not at least experimenting with it as a sandbox project, in anticipation of the day customers ask for or require it.

Many startups, established players and tool and platform providers are actively adding to the common pool of knowledge by contributing to or maintaining open source projects, taking part in efforts such as the Bytecode Alliance, or sharing their knowledge and experiences at conferences, such as KubeCon + CloudNativeCon Europe's co-located event Cloud Native Wasm Day. This collective effort will very likely serve as a catalyst for WebAssembly to move past its current status as just a very promising new technology and begin to be used for what it's intended for at massive industry scale.

Indeed, WebAssembly is the logical next step in the evolution from running applications on specific hardware, to running them on virtual machines, to running them in containers on Kubernetes, Torsten Volk, an analyst at Enterprise Management Associates (EMA), said. “The payout in terms of increased developer productivity alone justifies the initial investments that come with achieving this ultimate level of abstraction between code and infrastructure. No more library hell: No more debugging app-specific infrastructure. No more refactoring of app code for edge deployments. In general, no more wasting developer time on stuff other than writing code,” Volk said. “This will get us to a state where we can truly compose new applications from existing components without having to worry about compatibility.”


Work to Be Done

But until we get that point of developer-productivity nirvana, work needs to be done. “Now we need all-popular Python libraries to work on WebAssembly and integrations with key components of modern distributed apps, such as NoSQL storage, asynchronous messaging, distributed tracing, caching, etc.,” Volk said. “Luckily there’s a growing number of startups completing the ‘grunt work’ for us to make 2024 the year when WebAssembly really takes off in production.”

Collaboration, alliances and harmony in the community, especially in the realm of open source, will be critical. “The one thing I’ve learned from the container wars is that we were fighting each other too early in the process. There was this mindset that the winner would take all, but the truth is the winner takes all the burden,” Kelsey Hightower, principal developer advocate, Google Cloud, said during the opening remarks at KubeCon + CloudNativeCon Europe’s Cloud Native Wasm Day. “You will be stuck trying to maintain the standards on behalf of everyone else. Remember collaboration is going to be super important — because the price for this has to be this invisible layer underneath that’s just doing all of this hard work.”

At the end of the day, those writing software probably just want to use their favorite language and framework in order to do it, Hightower said. “How compatible will you be with that? Or will we require them to rewrite all the software?” Hightower said. “My guess is anything that requires people to rewrite everything is doomed to fail, almost guaranteed and that there is no way that the world is going to stop innovating at the pace we’re on where the world will stop, and implement all the lower levels. So, it is a time to be excited, but understand what the goal is and make sure that this thing is usable and has tangible results along the way.”

During the sidelines of the conference, Peter Smails, senior vice president and general manager, enterprise container management, at SUSE, discussed how internal teams at SUSE shared an interest in Wasm without going into details about SUSE’s involvement. “WebAssembly has an incredibly exciting future and we see practical application of WebAssembly. I personally think of it as similar to being next-generation Java: it is a small, lightweight, fast development platform and, arguably, is an infrastructure that lets you write code and deploy it where you want and that’s pretty cool,” Smails told The New Stack.

In many ways, WebAssembly proponents face a chicken-and-egg challenge. After all, what developer would not want to be able to use the programming language of their choice to deploy applications to any environment or device without having to worry about configuration issues? What operations and security team would not appreciate a single path of deployment from finalized application code to any device or environment (including Kubernetes), securely and without the hassles of reconfiguring the application for each endpoint? But we are not there yet, and many risks must be taken and investments made before wide-scale adoption really does happen the way it should in theory.

“We have a lot of people internally very excited about it, but practically speaking, we don’t have customers coming to talk about this asking for the requirements — that’s why it’s in the future,” Smails said. “We see it more as a potentially exciting space because we’re all about infrastructure.”

Get the Job Done

Meanwhile, there is huge momentum to create, test and standardize the Wasm infrastructure to pave the way for mass adoption, thanks largely to the work of the open source community on projects sponsored in-house or at the new tool-provider startups that continue to sprout up, as mentioned above. Among the more promising projects discussed during the KubeCon + CloudNativeCon co-located event Cloud Native Wasm Day, Saúl Cabrera, a staff developer at Shopify, described how he is leading the development of Winch in his talk “The Road to Winch.” Winch is a compiler in Wasmtime created to improve application performance beyond what Wasm already provides. Offering an alternative that addresses the limitations of a baseline compiler, the WebAssembly Intentionally-Non-Optimizing Compiler and Host (Winch) improves the startup times of WebAssembly applications, Cabrera said. Benchmark results demonstrating the touted performance metrics will be available in the near future, he added.

The post The Need to Roll up Your Sleeves for WebAssembly appeared first on The New Stack.

Python and WebAssembly: Elevating Performance for Web Apps https://thenewstack.io/python-and-webassembly-elevating-performance-for-web-apps/ Mon, 05 Jun 2023 10:00:33 +0000 https://thenewstack.io/?p=22709558


Python developers have long appreciated the language’s versatility and productivity. However, concerns persist about Python’s performance limitations and seamless integration with other languages.

The emergence of WebAssembly (Wasm) bridges this gap. Wasm empowers Python users to explore new frontiers of speed, compatibility and language interoperability.

In this article, we’ll delve into the world of WebAssembly and its relevance for Python enthusiasts. We will explore how Wasm propels Python applications to near-native performance levels, extends their capabilities across platforms and ecosystems, and unlocks a plethora of possibilities for web-based deployments.

WebAssembly simplifies the deployment of Python applications on the web. By compiling Python code into a format that can be executed directly in the browser, developers can seamlessly deliver their Python applications to a wide range of platforms without the need for complex setup or server-side processing.

The combination of Wasm and Python empowers developers to build high-performance web applications, leverage existing Python code and libraries, and explore new domains where Python's productivity and versatility shine.

The Benefits of Using WebAssembly with Python

Wasm brings a plethora of benefits when combined with Python, revolutionizing the way developers can leverage the language. Let’s explore some of the key advantages of using WebAssembly with Python:

Enhanced performance. Python, while highly expressive and easy to use, has traditionally been criticized for its relatively slow execution speed compared to low-level languages. By using WebAssembly, Python code can be compiled into highly optimized, low-level binary code that runs at near-native speed, significantly enhancing application performance and reducing load times.

This performance boost allows Python developers to tackle computationally intensive tasks, process large datasets or build real-time applications with enhanced responsiveness.

Language interoperability. WebAssembly provides a seamless integration pathway between Python and other languages like C++, Rust, and Go. By leveraging WebAssembly’s interoperability features, Python developers can tap into the vast ecosystem of libraries and tools available in these languages.

This empowers developers to harness the performance and functionality of existing codebases, extending Python’s capabilities and enabling them to build sophisticated applications with ease.

Platform independence. Wasm is not limited to the web browser environment. It offers a cross-platform runtime, making it possible to execute Python code on a wide range of devices and operating systems.

This cross-platform compatibility enables Python developers to target desktop applications, mobile apps, Internet of Things (IoT) devices, and more, using a unified codebase. It reduces development efforts, simplifies maintenance and expands the reach of Python applications to diverse computing environments.

Web deployment. WebAssembly has gained significant traction as a deployment format for web applications. By compiling Python code to WebAssembly, developers can directly execute Python in the browser, eliminating the need for server-side execution or transpiling Python to JavaScript.

This opens up exciting possibilities for building web applications entirely in Python, with seamless client-side interactivity and reduced server-side load.

Performance-critical components. Wasm is an excellent choice for integrating performance-critical components or algorithms into Python applications.

By offloading computationally intensive tasks to WebAssembly modules written in languages like Rust or C, developers can achieve significant performance improvements without sacrificing the productivity and ease of use provided by Python.

This hybrid approach combines the best of both worlds, leveraging Python’s high-level abstractions with the speed and efficiency of low-level code.

A growing ecosystem and tooling. The WebAssembly ecosystem is rapidly evolving, with a thriving community and an expanding range of tools, libraries and frameworks. Python developers can tap into this vibrant ecosystem to compile, optimize and run their code in Wasm.

The availability of tooling makes adoption easier and ensures developers have the necessary resources to harness the power of WebAssembly effectively.

7 Steps to Compile Python Code to Wasm

What follows are general steps to compile Python code to WebAssembly; the exact process and tools may vary depending on the specific compiler and configuration you choose. Refer to the documentation and resources provided by the compiler you’re using for detailed instructions and best practices.

Additionally, keep in mind that not all Python code may be suitable for compilation to WebAssembly, especially if it relies heavily on features that are not supported in the Wasm environment or if it requires extensive access to system resources.

  1. Choose a WebAssembly compiler. There are several compilers available that can convert Python code to WebAssembly. One popular option is Emscripten, a toolchain for compiling C/C++ code to WebAssembly; Python is supported by compiling the CPython interpreter itself, which is the approach Pyodide takes.
  2. Set up the development environment. Install the necessary dependencies and tools for the chosen compiler. This typically includes Python, a C/C++ compiler, and the WebAssembly compiler itself (such as Emscripten or Pyodide). Pyodide is a full Python environment that runs entirely in the browser, while Emscripten is a toolchain for compiling C and C++ code to Wasm.
  3. Prepare your Python code. Ensure that your Python code is compatible with the compiler. It’s essential to avoid using Python features or libraries that are not supported by the WebAssembly environment, as it has limited access to certain system resources.
  4. Compile Python to WebAssembly. Use the chosen compiler to translate the Python code into WebAssembly. The specific command or process will depend on the compiler you’re using. For example, with Emscripten, you would typically invoke the compiler with the necessary flags and options, specifying the Python source files as input.
  5. Optimize the WebAssembly output. After compiling, you may need to optimize the resulting Wasm code to improve performance and reduce the file size. The compiler may offer optimization flags or options to leverage to achieve this.
  6. Integrate with JavaScript. WebAssembly modules are typically loaded and interacted with through JavaScript. You will need to write JavaScript code that interacts with the compiled Wasm module, providing an interface for calling functions, passing data and handling the Python code’s results.
  7. Test and deploy. Once the compilation and integration steps are complete, test the WebAssembly module in various environments and scenarios to ensure it behaves as expected. You can then deploy the Wasm module to the desired target, such as a web server or an application that supports WebAssembly execution.

Loading and Executing Wasm Modules in Python

Remember that the specific steps and syntax needed for loading and executing WebAssembly modules in Python may vary depending on which Wasm interface library you’ve chosen. Again, refer to the documentation and resources provided by the library you’re using for detailed instructions and examples.

Choose a WebAssembly interface. Select a Python library or package that provides the necessary functionality for loading and executing WebAssembly modules. Some popular options include wasmtime, pywasm, and pyodide.

Install the required libraries. Install the chosen WebAssembly interface library using a package manager like pip. For example, you can install wasmtime by running $ pip install wasmtime.

Load the WebAssembly module. Use the WebAssembly interface library to load the Wasm module into your Python environment. Typically, you will provide the path to the WebAssembly module file as input to the loading function.

For instance, with wasmtime, you can use the wasmtime.wat2wasm() function to convert WebAssembly Text Format (WAT) source into the binary format, then build a module from those bytes with the wasmtime.Module constructor.

Create an instance. Once the Wasm module is loaded, you need to create an instance of it to execute its functions. This step involves invoking a function provided by the WebAssembly interface library, and passing the loaded module as a parameter.

The exact function and syntax may vary depending on the chosen library. For example, in wasmtime, you can use the wasmtime.Instance() constructor to create an instance, passing a Store, the compiled Module and a list of imports.

Call WebAssembly functions. After creating the instance, you can access and call functions defined within the WebAssembly module. The Wasm interface library typically provides methods or attributes to access these functions.

You can invoke the functions using the instance object, passing any required arguments. The return values can be retrieved from the function call. The specific syntax and usage depend on the chosen library.

Handle data interchange. WebAssembly modules often require exchanging data between Python and the Wasm environment. This can involve passing arguments to WebAssembly functions or retrieving results back to Python.

The Wasm interface library should provide mechanisms or functions to handle data interchange between Python and WebAssembly. This may include converting data types or handling memory management.

Handle errors and exceptions. When working with WebAssembly modules, it’s important to handle errors and exceptions gracefully. The chosen WebAssembly interface library should provide error-handling mechanisms or exception classes to catch and handle any potential errors or exceptions that may occur during module loading or function execution.

Test and iterate. Once the initial integration is complete, test the loaded WebAssembly module and its functions within your Python environment. Verify that the module executes as expected, produces the desired results, and handles edge cases appropriately. Iterate and refine your code as necessary.

Wasm and Python Use Cases across Different Domains

Scientific Simulations

Python is widely used in scientific computing, and WebAssembly can bring its computational capabilities to the web. For example, you can compile scientific simulation code written in Python to Wasm and run it directly in the browser.

This enables interactive and visually appealing web-based simulations, allowing users to explore scientific concepts without the need for server-side processing. Libraries like NumPy and SciPy can be utilized in combination with WebAssembly to achieve high-performance scientific simulations in the browser.

Machine Learning Models

Python is renowned for its rich ecosystem of machine learning libraries like TensorFlow, PyTorch, and Scikit-learn. With WebAssembly, you can compile trained machine learning models built in Python and deploy them in the browser or other environments.

This allows for client-side inference and real-time prediction capabilities without relying on server-side APIs. WebAssembly’s performance benefits enable efficient execution of complex models, empowering developers to create browser-based machine learning applications.

Web-Based Games

Python is increasingly used for game development due to its simplicity and versatility. By leveraging WebAssembly, Python game developers can bring their creations to the web without sacrificing performance.

By compiling game logic written in Python to Wasm, developers can create browser-based games with near-native speed and interactivity. Libraries like Pygame and Panda3D, when combined with WebAssembly, provide a powerful platform for cross-platform game development.

Web User Interfaces

Python developers can leverage Wasm to create rich, responsive UIs for web applications. By compiling Python UI frameworks or components, such as Pywebview or BeeWare, to WebAssembly, developers can build browser-based UIs that offer the simplicity and productivity of Python. This allows for a seamless user experience while retaining the power and expressiveness of Python for developing complex web applications.

Data Processing and Visualization

Python’s data processing and visualization libraries, such as Pandas, Matplotlib, and Plotly, can be used in conjunction with WebAssembly to perform data analysis and generate interactive visualizations directly in the browser.

By compiling Python code to Wasm, developers can create web applications that handle large datasets and provide real-time visualizations without the need for server-side computation.

The post Python and WebAssembly: Elevating Performance for Web Apps appeared first on The New Stack.

Demystifying WebAssembly: What Beginners Need to Know https://thenewstack.io/webassembly/webassembly-what-beginners-need-to-know/ Fri, 02 Jun 2023 12:35:19 +0000 https://thenewstack.io/?p=22708617

WebAssembly (Wasm) is a binary format that was designed to enhance the performance of web applications. It was created to address the limitations of JavaScript, an interpreted language that can lead to slower performance and longer page load times.

With WebAssembly, developers can compile code to a low-level binary format that can be executed by modern web browsers at near-native speeds. This can be particularly useful for applications that require intensive computation or need to process large amounts of data.

Compiling code to Wasm requires some knowledge of the programming language and tools being used, as well as an understanding of the WebAssembly format and how it interacts with the browser environment. However, the benefits of improved performance and security make it a worthwhile endeavor for many developers.

In this article, we will explore the basics of WebAssembly, including how it works with web browsers, how to compile code to Wasm, and best practices for writing secure WebAssembly code.

We will also discuss benchmarks and examples that illustrate the performance benefits of using WebAssembly compared to traditional web technologies. You will learn how WebAssembly can be used to create faster, more efficient and more secure web applications.

The Benefits of Using WebAssembly

As mentioned previously, WebAssembly offers faster execution times and improved performance compared to JavaScript, due to its efficient binary format and simpler instruction set. It enables developers to use other languages to create web applications, such as C++, Rust, and others.

Wasm also provides a more secure environment for running code on the web. In addition to performance, there are several other benefits to using it in web development:

Portability. Wasm is designed to be language-agnostic and can be used with multiple programming languages, enabling developers to write code in their preferred language and compile it to WebAssembly for use on the web.

Security. It provides a sandboxed environment for executing code, making it more secure than executing untrusted code directly in the browser.

Interoperability. Wasm modules can be easily integrated with JavaScript, allowing developers to use existing libraries and frameworks alongside new WebAssembly modules.

Accessibility. It can be used to bring applications written in native languages to the web, making them more accessible to users without requiring them to install additional software.

WebAssembly can be represented in two forms: binary format and textual format.

The binary format is Wasm’s native format, consisting of a sequence of bytes that represent the program’s instructions and data. This binary format is designed to be compact, efficient and easily parsed by machines. The binary format is also the form that is typically transmitted over the network when a Wasm program is loaded into a web page.

The textual representation of WebAssembly, on the other hand, is a more human-readable form that is similar to assembly language. The textual format is designed to be more readable, and easier to write and debug, than the binary format. The textual format consists of a series of instructions, each represented using a mnemonic and its operands, and it can be translated to the binary format using a WebAssembly compiler.

The textual format can be useful for writing and debugging Wasm programs, as it allows developers to more easily read and understand the program’s instructions. Additionally, the textual format can be used to write programs in high-level programming languages that can then be compiled to WebAssembly, which can help to simplify the process of writing and optimizing Wasm programs.
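To make the two forms concrete, here is a small hand-written module in the textual format — a hypothetical function that adds two 32-bit integers — which a tool such as wat2wasm can translate into the binary format:

```wat
(module
  ;; add two 32-bit integers and return the result
  (func $add (param $a i32) (param $b i32) (result i32)
    local.get $a
    local.get $b
    i32.add)
  (export "add" (func $add)))
```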

What Is the WebAssembly Instruction Set?

WebAssembly has a simple, stack-based instruction set that is designed to be easy to optimize for performance. It supports basic types such as integers and floating-point numbers, as well as more complex data structures such as vectors and tables.

The Wasm instruction set consists of a small number of low-level instructions that can be used to build more complex programs. These instructions can be used to manipulate data types such as integers, floats and memory addresses, and to perform control flow operations such as branching and looping.

Some examples of WebAssembly instructions include:

  • i32.add: adds two 32-bit integers together.
  • f64.mul: multiplies two 64-bit floating-point numbers together.
  • i32.load: loads a 32-bit integer from memory.
  • i32.store: stores a 32-bit integer into memory.
  • br_if: branches to a given label if a condition is true.

WebAssembly instructions operate on a stack-based virtual machine, where values are pushed onto and popped off of a stack as instructions are executed. For example, the i32.add instruction pops two 32-bit integers off the stack, adds them together, and then pushes the result back onto the stack.

This is significant because it improves the efficiency and simplicity of execution.

A stack-based architecture allows for the efficient execution of instructions. Since values are pushed onto the stack, instructions can easily access and operate on the topmost values without the need for explicit addressing or complex memory operations. This reduces the number of instructions needed to perform computations, resulting in faster execution.

Also, the stack-based model simplifies the design and implementation of the virtual machine. Instructions can be designed to work directly with values on the stack, eliminating the need for complex register management or memory addressing modes. This simplicity leads to a more compact and easier-to-understand instruction set.
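To make the stack discipline concrete, here is a toy evaluator in Python — purely illustrative, not a real Wasm runtime — that executes a few of the instructions named above:

```python
# Toy stack machine illustrating how Wasm-style instructions push and
# pop operands. Each instruction is an (opcode, *operands) tuple.
def execute(instructions):
    stack = []
    for op, *args in instructions:
        if op in ("i32.const", "f64.const"):  # push an immediate value
            stack.append(args[0])
        elif op == "i32.add":                 # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)  # wrap to 32 bits
        elif op == "f64.mul":                 # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unsupported opcode: {op}")
    return stack

# (i32.const 1) (i32.const 2) (i32.add) leaves 3 on the stack
print(execute([("i32.const", 1), ("i32.const", 2), ("i32.add",)]))  # [3]
```

Note how i32.add never names its operands: it simply consumes the top two stack slots, which is exactly what lets the format omit explicit addressing.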

The small number of instructions in the WebAssembly instruction set makes it easy to optimize and secure. Because the instructions are low-level, they can be easily translated into machine code, making Wasm programs fast and efficient.

Additionally, the fixed instruction set means that those programs are not prone to the same types of security vulnerabilities that can occur in more complex instruction sets.

How Does Wasm Work with the Browser?

WebAssembly code is loaded and executed within the browser’s sandboxed environment. It is typically loaded asynchronously using the fetch() API and then compiled and executed using the WebAssembly API.

Wasm can work with web browsers to provide efficient and secure execution of code in the client-side environment. Its code can be loaded and executed within a web page using JavaScript, and can interact with the Document Object Model (DOM) and other web APIs.

When a web page loads a WebAssembly module, the browser downloads the module’s binary file and compiles it using its built-in WebAssembly runtime. This runtime is integrated into the browser’s JavaScript engine and translates the Wasm code into machine code that the browser’s processor can execute.

Once the WebAssembly module is loaded and compiled, the browser can execute its functions and interact with its data. Wasm code can also call JavaScript functions and access browser APIs using JavaScript interop, which allows seamless communication between WebAssembly and JavaScript.

WebAssembly’s efficient execution can provide significant performance benefits for web applications, especially for computationally intensive tasks such as data processing or scientific calculations. Additionally, Wasm’s security model, which enforces strict memory isolation and control flow integrity, can improve the security of web applications and reduce the risk of security vulnerabilities.

How to Compile Code to WebAssembly

To compile code to WebAssembly, developers can use compilers that target the Wasm binary format, such as Clang or Emscripten.

Developers can also use languages that have built-in support for WebAssembly, such as Rust or AssemblyScript.

To compile code to WebAssembly, you will need a compiler that supports generating Wasm output. Here are some general steps:

  1. Choose a programming language that has a compiler capable of generating WebAssembly output. Some popular languages that support WebAssembly include C/C++, Rust and Go.
  2. Install the necessary tools for compiling code to WebAssembly. This can vary depending on the programming language and the specific compiler being used. For example, to compile C/C++ code to WebAssembly, you may need to install Emscripten, which is a toolchain for compiling C/C++ to WebAssembly.
  3. Write your code in the chosen programming language, making sure to follow any specific guidelines for WebAssembly output. For example, in C/C++, you may need to use special Emscripten-specific functions to interact with the browser environment.
  4. Use the compiler to generate WebAssembly output from your code. This will typically involve passing in command-line options or setting environment variables to specify that the output should be in Wasm format.

Optionally, optimize the WebAssembly output for performance or size; this can be done using tools such as wasm-opt or wasm-pack. Finally, load the generated WebAssembly code in your application or website using JavaScript or another compatible language.

Wasm modules are typically loaded asynchronously using the fetch() API.

Once the module is loaded, it can be compiled and instantiated using the WebAssembly API.

To load and run a WebAssembly module, you first need to create an instance of the module using the WebAssembly.instantiateStreaming or WebAssembly.instantiate method in JavaScript. The instantiateStreaming method takes a Response (typically the result of a fetch() call for the binary file), while instantiate takes the module’s bytes as an ArrayBuffer; both return a Promise that resolves to an object containing a WebAssembly.Module and a WebAssembly.Instance, whose exports hold the module’s functions.

Once you have the instance and its exported functions, you can call them to interact with the Wasm module. They can be called just like any other JavaScript function, but they execute WebAssembly code instead of JavaScript code.

Here’s an example of how to load and run a simple WebAssembly module in JavaScript:

// Load the WebAssembly module from a binary file
fetch('module.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))
  .then(module => {
    // Get the exported function from the module
    const add = module.instance.exports.add;

    // Call the function and print the result
    const result = add(1, 2);
    console.log(result);
  });


In this example, we use the fetch API to load the WebAssembly binary file as an ArrayBuffer, and then pass it to the WebAssembly.instantiate method to create an instance of the WebAssembly module.

We then get the exported function add from the instance, call it with arguments 1 and 2, and print the result to the console.

It’s important to note that WebAssembly modules run in a sandboxed environment and cannot access JavaScript variables or APIs directly.

To communicate with JavaScript, WebAssembly modules must use the WebAssembly.Memory and WebAssembly.Table objects to interact with data and function pointers that are passed back and forth between the WebAssembly and JavaScript environments.

Performance Advantages of WebAssembly

WebAssembly can improve performance compared to other web technologies in a number of ways.

First, Wasm code can be compiled ahead-of-time (AOT) or just-in-time (JIT) to improve performance. AOT compilation allows WebAssembly code to be compiled to machine code that can be executed directly by the CPU, bypassing the need for an interpreter.

JIT compilation, on the other hand, allows WebAssembly code to be compiled to machine code on the fly, at runtime, which can provide faster startup times and better performance for code that is executed frequently.

Additionally, WebAssembly can take advantage of hardware acceleration, such as SIMD (single instruction, multiple data) instructions, to further improve performance. SIMD instructions allow multiple operations to be performed simultaneously on a single processor core, which can significantly speed up mathematical and other data-intensive operations.

Here are some benchmarks and examples that illustrate the performance benefits of using WebAssembly.

Game of Life. A cellular automaton that involves updating a grid of cells based on a set of rules. The algorithm is simple, but it can be computationally intensive. The WebAssembly version of the algorithm runs about 10 times faster than the JavaScript version.

Image processing. Image processing algorithms can be highly optimized using SIMD instructions, which are available in WebAssembly. The Wasm version of an image processing algorithm can run about three times faster than the JavaScript version.

AI/machine learning. Machine learning algorithms can be highly compute-intensive, making them a good candidate for WebAssembly. TensorFlow.js is a popular JavaScript library for machine learning, but its performance can be improved by using the WebAssembly version of TensorFlow. In some benchmarks, the Wasm version runs about two times faster than the JavaScript version.

Audio processing. WebAssembly can be used to implement real-time audio processing algorithms. The Web Audio API provides a way to process audio data in the browser, and the WebAssembly version of an audio processing algorithm can run about two times faster than the JavaScript version.

Wasm Security Considerations

WebAssembly supports various security policies that allow web developers to control how their code interacts with the browser’s resources. For example, Wasm modules can be restricted from accessing certain APIs or executing certain types of instructions.

WebAssembly code runs within the browser’s sandboxed environment, which limits its access to the user’s system.

Wasm code is subject to the same-origin policy, which restricts access to resources from a different origin (i.e., domain, protocol and port). This prevents Wasm code from accessing sensitive resources or data on a website that it shouldn’t have access to.

WebAssembly also supports sandboxing through the use of a memory-safe execution environment. This means that Wasm code cannot access memory outside of its own allocated memory space, preventing buffer overflow attacks and other memory-related vulnerabilities.

Additionally, WebAssembly supports features such as trap handlers, which can intercept and handle potential security issues, and permissions, which allow a module to specify which resources it needs access to.

Furthermore, Wasm can be signed and verified using digital signatures, ensuring that the code has not been tampered with or modified during transmission or storage. WebAssembly code can also be executed in a secure execution environment, such as within a secure enclave, to further enhance its security.

Best Practices for Writing Secure Wasm Code

When writing WebAssembly code, there are several best practices that developers can follow to ensure the security of their code.

Validate inputs. As with any code, it is important to validate inputs to ensure that they are in the expected format and range. This can help prevent security vulnerabilities such as buffer overflows and integer overflows.

Use memory safely. WebAssembly provides low-level access to memory, which can be a source of vulnerabilities such as buffer overflows and use-after-free bugs. It is important to use memory safely by checking bounds, initializing variables and releasing memory when it is no longer needed.

Avoid branching on secret data. Branching on secret data can leak information through side channels such as timing attacks. To avoid this, it is best to use constant-time algorithms or to ensure that all branches take the same amount of time.
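The same principle can be illustrated outside Wasm itself. This Python sketch contrasts an early-exit comparison, whose running time depends on how much of the secret matches, with a constant-time one (the byte strings are hypothetical examples):

```python
import hmac

def insecure_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: the time taken leaks how many leading
    # bytes of the secret an attacker has guessed correctly.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its timing does not depend on where the inputs differ.
    return hmac.compare_digest(a, b)

print(constant_time_equal(b"secret", b"secret"))  # True
print(constant_time_equal(b"secret", b"secreT"))  # False
```

The same reasoning applies inside a Wasm module: comparisons over secret data should take the same number of instructions on every path.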

Use typed arrays. WebAssembly provides typed arrays that can be used to store and manipulate data in a type-safe manner. Using typed arrays can help prevent vulnerabilities such as buffer overflows and type confusion.

Limit access to imported functions. Imported functions can introduce vulnerabilities if they are not properly validated or if they have unintended side effects. To limit the risk, it is best to restrict access to imported functions and to validate their inputs and outputs.

Use sandboxes. To further isolate WebAssembly code from the rest of the application, it can be run in a sandboxed environment with restricted access to resources such as the file system and network. This can help prevent attackers from using WebAssembly code as a vector for attacks.

Keep code minimal. Write minimal code with clear boundaries that separate untrusted and trusted code, thus reducing the attack surface area.

Avoid using system calls as much as possible. Instead, use web APIs to perform operations that require input/output or other system-related tasks.

Use cryptographic libraries. Well-known cryptographic libraries and primitives such as libsodium, bcrypt or scrypt can help secure your data.

The post Demystifying WebAssembly: What Beginners Need to Know appeared first on The New Stack.

Case Study: A WebAssembly Failure, and Lessons Learned https://thenewstack.io/webassembly/case-study-a-webassembly-failure-and-lessons-learned/ Thu, 25 May 2023 14:00:55 +0000 https://thenewstack.io/?p=22708922

VANCOUVER — In their talk “Microservices and WASM, Are We There Yet?” at the Linux Foundation’s Open Source Summit North America, Kingdon Barrett, of Weaveworks, and Will Christensen, of Defense Unicorns, said they were as surprised as anyone that their talk was accepted, since they were newbies who had spent about three weeks delving into this nascent technology.

And their project failed. (Barrett argued, “It only sort of failed … We accomplished the goal of the talk!”)

But they learned a lot about what WebAssembly, or Wasm, can and cannot do.

“Wasm has largely delivered on its promise in a browser and in apps, but what about for microservices?” the pair’s talk synopsis summarized. “We didn’t know either, so we tried to build a simple project that seemed fun, and learned Wasm for microservices is not as mature and a bit more complicated than running in the browser.”

“Are we there yet? Not really. There’s some caveats,” said Christensen. “But there are a lot of things that do work, but it’s not enough that I wouldn’t bet the farm on it kind of thing.”

Finding Wasm’s Limitations

Barrett, an open source support engineer at Weaveworks, called WebAssembly “this special compiled bytecode language that works on some kind of like a virtual machine that’s very native toward JavaScript. It’s definitely shown that [it] is significantly faster than, let’s say, JavaScript running with the JIT (just-in-time compiler).

“And when you write software to compile for it, you just need to treat it like a different target — like on x86 or Arm architectures; we can compile to a lot of different targets.”

The speakers found there are limitations or design constraints, if you will:

  • You cannot access the network in an unpermissioned way.
  • You cannot pass a string as an argument to a function.
  • You cannot access the file system unless you have specified the things that are permitted.

“There is no string type,” Barrett said. “As far as I can tell, you have to manage memory and count the bytes you’re going to pass. Make sure you don’t lose that number. That’s a little awkward, but there is a way around that as well.”

One of the big potential benefits for government contractors with Wasm is the ability to use existing code and to retain people with deep knowledge in a particular language.

The talk was part of the OpenGovCon track at the conference.

“We came up with this concept, being the government space, that I thought was going to be really interesting for an ATO perspective” — authorized to operate — “which is, how do you enable continuous delivery while still maintaining a consistent environment?” Christensen said.

The government uses ATO certification to manage risk in contractors’ networks by evaluating the security controls for new and existing systems.

One of the big potential benefits for government contractors with Wasm, Christensen said, is the ability to use existing code and to retain people with deep knowledge in a particular language.

“You can use that, tweak it a little bit and get life out of it,” he said. “You may have some performance losses where there may be some nuances, but largely you can retain a lot of that domain language or that sort of domain knowledge and carry it over for the future.”

Barrett and Christensen set out to write a Kubernetes operator.

“I wanted to write something in Go … so all your functions for this or wherever you need come in the event hooks,” Christensen said.

Then, instead of keeping that state in a function or a class inside that monolithic operator design, the idea is that you can reference an external value store. It could be a Redis cache, a database or object storage. Wasm is small enough that a small binary can be loaded at initialization.

If cold start times are not a problem, you could write something that will take a request, pull a Wasm module, load it, run it and return the result.

And, Christensen continued, “if you really want to get creative, you can shove it in as a config map inside of Kubernetes and … whatever you want to do, but the biggest thing is Wasm gets pulled in. And the idea is you call it almost like a function, and you just execute it.

“And each one of those executions would be a sandbox so you can control the exposure and security and what’s exposed throughout the entire operator. … You could statically compile the entire operator and control it that way. Anyone who wants to work in the sandbox with modules, they would have the freedom within the sandbox to execute. This is the dream. … Well, it didn’t work.”

The idea was that there would be stringent controls in a sandbox about how the runtime would be exposed to the Wasm module, which would include logging and traceability for compliance.
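That sandbox-as-a-contract idea is easy to see in code. The sketch below is ours, not the speakers' operator: it hand-assembles a tiny Wasm module whose only import is a single `env.log` function. Whatever the host does not pass in simply does not exist for the module.

```javascript
// A hand-assembled Wasm module: it imports one host function, env.log(i32),
// and exports run(i32), which just forwards its argument to env.log.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // "\0asm" magic + version 1
  0x01, 0x05, 0x01, 0x60, 0x01, 0x7f, 0x00,                   // type 0: (i32) -> ()
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import section: "env"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                         //   "log", func, type 0
  0x03, 0x02, 0x01, 0x00,                                     // func 1 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export "run" -> func 1
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b, // body: local.get 0; call 0
]);

const seen = [];
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes), {
  env: { log: (x) => seen.push(x) }, // the ONLY capability the guest receives
});

instance.exports.run(7);
console.log(seen); // [ 7 ] — the module can do nothing beyond what the host handed it
```

Swap the `log` implementation for one that writes audit records and you get logging and traceability enforced by the runtime itself rather than by code review.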

Runtimes and Languages

WebAssembly is being hailed for its ability to compile from any language. But Andrew Cornwall, a Forrester analyst, told The New Stack that it's easier to compile languages that do not have garbage collectors, so languages such as Java, Python and other interpreted languages tend to be more difficult to run in WebAssembly than languages such as C or Rust.

Barrett and Christensen took a few runtimes and languages for (ahem) a spin. Here’s what they found:

Fermyon Spin

The RuntimeClass resource has been available since Kubernetes v1.12. Spin is easy to get started with and light on controls, but the design requires privileged access to your nodes. Containerd shims control which nodes get provisioned with the runtime.

Kwasm

“There’s a field on the deployment class called runtimeClassName, and you can set that to whatever you want, as long as containerd knows what that means. So Kwasm operator breaks into the host node and sets up some containerd configuration imports of binary from wherever — this is not production ready,” Barrett said, unless you already had separate controls around all of those knobs and know how to authorize that type of grant safely.

He added, “Anyway, this was very easy to get your Wasm modules to run directly on Kubernetes this way, though it does require privileged access to the nodes and it’s definitely not ATO.”

WASI/WAGI

WASI (WebAssembly System Interface) provides system interfaces; WAGI (WebAssembly Gateway Interface) permits standard IO to be treated as a connection.

“Basically, you don’t have to handle connections, the runtime handles that for you,” Barrett said. “That’s how I would summarize WAGI, and WASI is the system interface that makes that possible. You have standard input, standard output, you have the ability to share memory, and functions — you can import them or export them, call them from inside or outside of the Wasm, but only in ways that you permit.”

WasmEdge

WasmEdge Runtime, written in C++, became a Cloud Native Computing Foundation project in 2021.

The speakers extolled an earlier talk at the conference by Michael Yuan, a maintainer of the project, and urged attendees to look for it.

Wasmer/Wasmtime

Barrett and Christensen touted the documentation on these runtime projects.

“There are a lot of language examples that are pretty much parallel to what I went through … and it started to click for me,” Barrett said. “I didn’t really understand WASI at first, but going through those examples made it pretty clear.”

They’re designed to get you thinking about low-level constructs of Wasm:

  • What is possible with functions, memory and the compiler.
  • How to exercise these directly from within the host language.
  • How to separate your business logic.
  • How the constraints of these environments help you scope your project’s deliverable functions down smaller and smaller.

Wasmtime or Wasmer run examples in WAT (WebAssembly Text Format), a textual representation of the Wasm binary format, something to keep in mind when working in a language like Go. If you’re trying to figure out how to call modules in Go and it’s not working, check out Wazero, the zero-dependency WebAssembly runtime written in Go, Barrett said.
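To make the text format concrete — this example is ours, not from the talk — a module that exports a two-integer add function reads like this in WAT:

```wat
(module
  (func $add (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add)
  (export "add" (func $add)))
```

Tools such as `wat2wasm` translate this one-to-one into the compact binary format the runtimes actually execute.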

Rust

It has first-class support and the most documentation, the speakers noted.

“If you have domain knowledge of Rust already, you can start exploring right now how to use Wasm in your production workflow,” Christensen said.

Node.js/Deno

Wasm was first designed for use in web browsers, and there is already a lot of information about the V8 engine running code other than JavaScript in the browser. V8 is implemented in C++ with support for JavaScript, and that same engine sits at the heart of Node.js and Deno. That shared browser heritage is what makes using Wasm from Node.js or Deno so simple.

“A lot of the websites that had the integration already with the V8 engine, so we found that from the command line from a microservices perspective was kind of really easy to implement,” Christensen said.

“So the whole concept about the strings part, about passing it with a pointer, if you’re running Node.js and Deno, you can pass strings natively and you don’t even know it’s any different. …Using Deno, it was really simple to implement. …There are a lot of examples that we’ve discovered, one of which is ‘Hello World,’ actually works. I can compile it so it actually runs and can pass a string and get a string out simply from a WebAssembly module with Deno.”

Christensen said that Deno and Node.js currently provide the best combination of production-ready Wasm support and a sufficient developer experience.
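A minimal, self-contained illustration of that developer experience (our example, using a hand-assembled module rather than compiler output): the same lines run unmodified under both Node.js and Deno, because both expose V8's standard WebAssembly API.

```javascript
// Hand-assembled Wasm module exporting add(a, b) -> a + b (i32 arithmetic).
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" -> func 0
  0x0a, 0x09, 0x01, 0x07,                               // code section
  0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,             // body: local.get 0/1; i32.add
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;
console.log(add(2, 3)); // prints 5
```

Numbers cross the boundary for free; it is strings and other compound data that need the glue discussed below.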

A Few Caveats

“But a little bit of warning when you go to compile,” Christensen said. “What we have discovered is: all Wasm is not compiled the same.”

There are three main compiler backends for Wasm:

  • Singlepass doesn’t have the fastest runtime, but it has the fastest compilation.
  • Cranelift is the main engine used in Wasmer and Wasmtime. It doesn’t have the fastest runtime either, but it offers a middle ground: compilation is much faster than LLVM’s.
  • LLVM has the slowest compile time — no one who’s ever used LLVM will be surprised — but it produces the fastest runtime.

A Few Problems

Pointer functions for handling strings are problematic. String passing, specifically with Rust, even when done correctly, could decrease performance by up to 20 times, they said.
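To see why strings are awkward, here is a sketch (ours, not the speakers') of the pointer dance a host has to do. The hand-assembled module exports its linear memory plus a `first_byte(ptr)` function; the host must encode the string, copy the bytes into that memory and pass an offset. Every string crossing the boundary pays for an encode and a copy.

```javascript
// Hand-assembled module: exports one page of linear memory as "mem"
// and first_byte(ptr) -> i32, which returns the byte stored at ptr.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type 0: (i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x05, 0x03, 0x01, 0x00, 0x01,                         // memory 0: min 1 page
  0x07, 0x14, 0x02,                                     // export section, 2 entries:
  0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00,                   //   "mem" -> memory 0
  0x0a, 0x66, 0x69, 0x72, 0x73, 0x74, 0x5f,             //   "first_byte"
  0x62, 0x79, 0x74, 0x65, 0x00, 0x00,                   //     -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x2d, 0x00, 0x00, 0x0b, // body: i32.load8_u
]);

const { mem, first_byte } = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;

// The "string passing": encode on the host, copy into guest memory, hand over a pointer.
const ptr = 16; // pretend an allocator in the module returned this offset
new Uint8Array(mem.buffer).set(new TextEncoder().encode("hello"), ptr);
console.log(first_byte(ptr)); // 104 — the byte code for "h"
```

Real toolchains hide this behind generated glue, but the encode-copy-pointer round trip is still there, which is where the slowdowns come from.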

There is a significant difference between compiled and interpreted languages when compiled to a Wasm target. Wasm binaries for Ruby and Python may carry 20MB to 50MB size penalties compared to Go or Rust because of the inclusion of the interpreter.

“And specifically, just because we’re compiling Ruby or Python to Wasm, you do need to compile the entire interpreter into it,” Christensen said. “So that means if you are expecting Wasm to be better for boot times and that kind of stuff, if you’re using an interpreted language, you are basically shoving the entire interpreter into the Wasm binary and then running your code to be on the interpreter. So please take note that it’s not a uniform experience.”

“If you’re using an interpreted language, it’s still interpreted in Wasm,” Barrett said. “If you’re passing the script itself into Wasm, the interpreter is compiled in Wasm but the script is still interpreted.”

And Christensen added, “You’re restricted to the runtime restrictions of the browser itself, which means sometimes they may be single-threaded. Good, bad, just be aware.”

A web browser, Deno and Node.js all use the V8 engine, meaning they all exhibit the same limitations when running Wasm.

And language threading needs to be known at runtime for both host and module.

“One thing I’ve noticed: in Go, if I use the HTTP module to do a request from a Wasm-compiled Go module from Deno, there is no way that I can turn around and make sure that’s not gonna break the threaded nature of Deno and that V8 engine,” Christensen said.

He added, “Maybe there’s an answer there, but I didn’t find it. So if you are just getting started and you’re just trying to mess around and try to find all that happening, just know that you may spend some time there.”

And what happens when you have a C dependency with your RubyGem?

Barrett said he didn’t try that at all.

“Most Ruby dependencies are probably native Ruby, not native extensions,” he said. “They’re pure Ruby, but a ‘native extension’ is Ruby compiling C code. And then you have to deal with C code now,” in addition to Ruby.

“Of course, C compiles to Wasm, so I’m sure there is a solution for this. But I haven’t found anyone who has solved it yet.”

It applies to some Python packages as well, Christensen said.

“They [Python eggs] are using the binary modules as well, so there is definitely no way to do a [native system] binary translation into Wasm — binary to binary,” he said. “So if you need to do it, you need to get your hands dirty, compile the library itself to Wasm, then compile whatever gem or package that function calls are there.”

The speakers said that in working with Wasm, they found that ChatGPT wasn’t very helpful and that debugging can be harsh.

So, Should You Be Excited about Wasm?

“Yes. There’s plenty of reasons to be excited,” Christensen said. “It may not be ready yet, but I definitely think it’s enough to move forward and start playing around yourself.”

When Wasm is fully mature, he said, it will have benefits in terms of tech workforce retention, especially in governmental organizations: “You can take existing workforce, you don’t have to re-hire and you can get longevity out of them. Especially to have all that wonderful domain knowledge and you don’t have to re-solve the same problem using a new tool.

“If you have a lot of JavaScript stuff, [you’ll have] better control over it and it runs faster, which is the whole reason why Wasm is interesting,” Christensen said. The reason is that JavaScript compiled to Wasm is much faster, as the V8 engine no longer has to do “just-in-time” operations.

“And then finally, I’m sure a lot of you have an ARM MacBook, and then you try to deploy something to the cloud,” he said. “And next thing you realize, ‘Oh look, my entire stack is in x86.’ Well, Wasm magically does take care of this. I did test this out on a Mac Mini and ran it on a brand new AMD 64 system and Deno couldn’t tell the difference.”

WebAssembly is ready to be tested, Christensen said, and the open source community is the way to make that happen.

“Let the maintainers know; start talking about it. Bring up issues. We need more working examples. That’s missing. We can’t even get ChatGPT to give us anything decent,” he said, so the community is relying on its members to experiment with it and share their experiences.

The post Case Study: A WebAssembly Failure, and Lessons Learned appeared first on The New Stack.

New Image Trends Frontend Developers Should Support https://thenewstack.io/new-image-trends-frontend-developers-should-support/ Thu, 25 May 2023 13:00:55 +0000 https://thenewstack.io/?p=22708951

Media management firm Cloudinary is working on a plug-in that will enable developers to leverage its image capabilities from within ChatGPT.

It’s part of keeping up with new technologies that, like AI, are changing user expectations when it comes to a frontend experience, said Tal Lev-Ami, CTO and co-founder of online media management company Cloudinary.

“If you look at e-commerce, many websites now have ways to know what you want to buy the 360 [degree] way and some of them also have integrated AR experiences where you can take whatever object it is and either see it in the room or see it on yourself,” Lev-Ami told The New Stack. “These are considerations that are becoming more critical for developers to support.”

Another thing developers should consider is how AI-enabled media manipulation will alter the expectations of end users. He compared it to the internet’s shift from simply text to using images. Images didn’t replace text, but users suddenly expected images on web pages.

“The expectations of the end users on the quality and personalization of the media is ever increasing, because they see ads and they see more sophisticated visual experiences,” he said. “It’s not that everything before is meaningless; it’s still needed. But if you’re not there to meet the expectations of the end user in terms of experiences, then you’re getting left behind.”

Supporting 3D

There are challenges around supporting 3D, such as how to optimize images and (for instance) how to take a file developed for CAD and convert it to a 3D media format that’s supported on the web, such as glTF, an open standard file format for three-dimensional scenes and models, Lev-Ami said.

A case study with Minted, a crowdsourced art site with 59.8 million images, offers a look at what’s required to support 3D. Minted used Cloudinary to improve its image generation pipeline with support for a full set of 2D and 3D transforms and automation technology. A single product at Minted can have more than 100,000 variants, according to a case study of Minted’s Cloudinary deployment.

The case study explained how the art site worked with the media company to create a 3D shopping experience. First, images of the scenes were created in a studio; then an internal image specialist sliced each image into layers and corrected for transparency, color and position. A script was then used to generate the coordinates needed to position these layers as named transforms in a text file (CSV), which, when uploaded to Cloudinary along with the previously created scene layers, created the final image.

Separately, Minted’s proprietary pipeline ingests raw art files from artists and builds the base images for each winning design. When a customer navigates to an art category page or product details page on Minted, the page sends requests to Cloudinary for images that composite the correct combination of scenes, designs, frame and texture into the final thumbnails, the case study explained.

“For close-up product images, Minted makes use of Cloudinary’s 3D rendering capability as well as its e_distort API feature,” the case study noted. “A 3D model with texture UV mapping was created for the close-up image that shows off the texture and wrapping effect of a stretched canvas art print. With some careful tweaking of the 3D coordinates, the model is uploaded and Cloudinary does the rest, composing the art design as texture onto the model.”

Bring Your Own Algorithms

WebAssembly is another relatively new technology for the frontend, where it can be used to deploy streaming media. So I asked Lev-Ami whether Wasm is also changing how media works on the frontend, or perhaps how Cloudinary manages its own workload. While Cloudinary does deploy Wasm to support edge computing, the company also allows developers to upload Wasm and run their own algorithms.

“We actually have a capability where you can upload your own Wasm so that you can run your own algorithm as part of the media processing pipeline,” he said. “If you have some unique algorithm that you want to run as part of the media processing pipeline, you can do that. The safety and security around Wasm allows us to be more open as a platform and allows customers to handle use cases where they need to run their own algorithms part of the pipeline.”

Wasm carries fewer security risks than natively executed code because it executes within its own sandbox, according to Andrew Cornwall, a senior analyst with Forrester who specializes in the application development space. Code compiled to WebAssembly can’t grab passwords, for instance, Cornwall recently told The New Stack.

The post New Image Trends Frontend Developers Should Support appeared first on The New Stack.

Could WebAssembly Be the Key to Decreasing Kubernetes Use? https://thenewstack.io/could-webassembly-be-the-key-to-decreasing-kubernetes-use/ Mon, 22 May 2023 13:00:06 +0000 https://thenewstack.io/?p=22708613

WebAssembly, aka Wasm, is already changing how companies deploy Kubernetes, according to Taylor Thomas, a systems engineer and director of customer engineering at Cosmonic. Fortune 100 companies are spinning down Kubernetes clusters to use Wasm instead, he said.

There will always be a place for Kubernetes, he added — just perhaps not as an ad hoc development platform.

“We’ve seen so many companies in the Fortune 100 who we’ve talked to who are getting rid of Kubernetes teams and spinning down Kubernetes clusters,” Thomas told The New Stack. “It’s just so expensive. It’s so wasteful that the utilization numbers we get from most people are 25 to 35%.”

Kubernetes forces developers to care about infrastructure and they don’t necessarily want to, he added.

“Basically, developers have to care about their infrastructure much more than they need to,” he said. “A lot of these things around microservices, we did them in Kubernetes because that was a great way to do it before we had stuff like WebAssembly, but microservices and functions … all those things work better in a world where WebAssembly exists because you focus just on writing that code.”

WebAssembly, or Wasm, is a low-level byte code that can be translated to assembly. A bytecode is computer object code that an interpreter converts into binary machine code so it can be read by a computer’s hardware processor.

Cosmonic Bets on Open Source

Cosmonic is counting on Wasm winning. In April, the WebAssembly platform-as-a-service company launched its open beta and released Cosmonic Connect, a set of third-party connectors designed to simplify Wasm integration. The first Cosmonic Connect integration to launch was Cosmonic Connect Kubernetes.

“You can now connect Kubernetes clusters with a single command,” he said. “We manage all the Wasm cloud-specific bits. We have a beautiful UI you can use to see and manage these things.”

Cosmonic is also involved in furthering WebAssembly standards, including the proposed component model. With the component model, language silos could be broken down by compiling to Wasm, Thomas said. The function then becomes like Lego blocks — developers could combine functions from different languages into WebAssembly and the functions would work together, he added.

“We’ve been focusing on a common set of contracts that we’ve been using at Wasm cloud for a long time, and we’re now centralizing on in the WebAssembly community called wasi-cloud,” he said. “These things are wasi key value, wasi messaging — [if] you want to use a key-value database in 80% of the use cases, you just need the same five functions — get, set, put, all these common things — and so it’s defined by an interface.”

That will allow developers to “click” code from different languages together, he said.

“That language barrier is so incredibly powerful — that really fundamentally changes how we put together applications,” Thomas said. “Because of WebAssembly being able to compile from any language, that thing you’re using could be written in Rust or C, and the thing you’re writing could be in Go or Python, and then they plug together when they actually run.”

That doesn’t just break the language barrier — it can also break down vendor barriers because now everything can be moved around, he added. Components will also liberate developers from being locked into custom software development kits (SDKs) or libraries, he said.

“It’s a walled garden and we don’t want that to be the case. We want it to be you just write against the contracts and we provide the stuff you need for our platform but you just focus on the code part of it,” he said. “That’s very different than all these other approaches where you either had to confine yourself to a specific language or a specific type of way things were set up or any of those kinds of details.”

Cosmonic is also a maintainer on the CNCF project wasmCloud and works with the wasmCloud Application Deployment Manager (WADM) standard. He compared WADM to running a YAML file.

“WADM gives you the ability to connect to something to use a familiar pattern,” Thomas said. “A user is able to define their application, they can say, Okay, here’s the dependencies I’m using that I’m going to link and at runtime, here’s the configuration I’m passing to it. And here’s the code I’m running. And they can specify all those things where they want to run it, and then it’ll run it everywhere for them, and then automatically reconcile if something disappears, or something moves around.”

The post Could WebAssembly Be the Key to Decreasing Kubernetes Use? appeared first on The New Stack.

Forrester on WebAssembly for Developers: Frontend to Backend https://thenewstack.io/forrester-on-webassembly-for-developers-frontend-to-backend/ Wed, 17 May 2023 13:00:11 +0000 https://thenewstack.io/?p=22708204

There are a lot of things to love about WebAssembly — but how do developers decide when to use it? Does it matter in what language you write to WebAssembly? And what about security? To learn more about what frontend developers need to know, I sat down with Andrew Cornwall, a senior analyst with Forrester who specializes in the application development space.

The good news is, functionality does not alter depending on which coding language you write in. Write in C++, AssemblyScript, Rust — it’s the developer’s choice, Cornwall said. Typically, it’s easier to compile languages that do not have garbage collectors, so languages such as Java, Python, and interpreted languages tend to be more difficult to have running in WebAssembly than languages such as C or Rust. But the end result will be WebAssembly, which he noted is best thought of as a processor rather than a language.

“Something like JavaScript or Java or Python, where there’s a whole ecosystem in there that needs to be in place before you can run,” Cornwall said.

Typically, developers will take the C implementation of Python, compile it using a compiler that outputs WebAssembly, he said. Now they have a Python interpreter that is written in WebAssembly, which they can then feed regular Python code.

“That is easier to do than converting Python to WebAssembly itself,” he added. “Once it’s in WebAssembly, it doesn’t matter. It just runs — it’s essentially very similar to machine code.”

For other supported languages, rather than compile to x86 or Arm on a compiler, developers opt for WebAssembly when compiling, he explained. The compiler outputs the byte code that will run — WebAssembly, or Wasm, is a low-level byte code that can be translated to assembly. A bytecode is computer object code that an interpreter converts into binary machine code so it can be read by a computer’s hardware processor. Essentially, WebAssembly converts code to this portable binary-code format. As such, it has more in common with machine language than anything else and that’s why it’s so gosh darn fast.
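That portable binary-code format starts with a fixed eight-byte header — the magic string `\0asm` plus a version — and V8-based runtimes expose `WebAssembly.validate` to check a buffer against the spec without running it. A quick sketch (the module bytes are hand-assembled for illustration):

```javascript
// A complete (if tiny) Wasm binary: it exports add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0/1; i32.add
]);

console.log(String.fromCharCode(...wasmBytes.slice(1, 4))); // "asm"
console.log(WebAssembly.validate(wasmBytes));               // true
console.log(WebAssembly.validate(new Uint8Array([1, 2])));  // false — not a module
```

The same 41 bytes are what a compiler emits for this function regardless of the source language, which is the sense in which Wasm behaves like a processor rather than a language.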

Wasm Use Cases for the Frontend

When WebAssembly first came out, it was seen primarily as a solution for frontend needs, Cornwall said. Typical use cases for the frontend include operations with a lot of matrix math and video. If you need something to start executing right away and don’t have time to wait for the JavaScript to download and parse in the browser, then WebAssembly is a great solution, he said. For instance, the BBC created a video player for its site in Wasm, and Figma is written in C++ and compiled to WebAssembly, which cut its load time by a factor of three.

“WebAssembly can be streaming so you can download it and start executing it right away,” Cornwall said. “Other than that, the other interesting use case for WebAssembly on web front ends is going to be not so much for JavaScript developers, but for developers of other things.”

That’s in part because JavaScript running through the just-in-time [JIT] compiler is actually pretty fast, he said, adding that developers can get to half native speed with JavaScript “if you let it run long enough.” For other developers, Wasm means they can write in their favorite, supported code and then compile to Wasm for the frontend.

“The interesting parts where WebAssembly gets used are essentially things where you’d go down to machine code if you were writing a program in another language,” he said. “If there is something that needs to be really fast right away, and you can’t afford to wait for the JIT to bring it into high speed by optimizing it, or if there is something you need it to start right away and you don’t want to wait for the time for the JavaScript to be parsed, for instance, so you have it in WebAssembly.”

Wasm for the Backend

Then a funny thing happened along the way to the assembly (ahem): Wasm started to become less of a frontend thing and more of a backend thing as it began to be leveraged for serverless compute, he said.

“WebAssembly VMs [virtual machines] start really fast compared to JavaScript VMs or containers,” he said. “A JavaScript VM starts in milliseconds — 50, 100 milliseconds; WebAssembly VMs can start in microseconds. … If you’re running serverless functions, that’s great because you make a call out to the server and say, give me the result. It can then start up and give you the results really quickly, whereas other things like JavaScript VMs, Java VMs and containers have that startup time — the cost it takes for them to start running before they can do something with the values that you’re passing them and give you the result.”

That includes Kubernetes containers, he added. And there are places — serverless functions or where the web browser wants to make a request of a search function — where developers would want to use WebAssembly VMs instead of a Kubernetes container, he added.

“If you send that search request off, you’re waiting until the container comes up, runs the search code itself and then sends the result back. Often containers will allow multiple connections because it’s expensive to bring a container up,” he said. “So Kubernetes has a cost to bring the container up. With WebAssembly you don’t have as much of a cost. It’s microseconds to come up rather than milliseconds; or even if it’s a container, it could be hundreds of milliseconds or getting close to half a second.”

Multiply that by thousands of requests and those milliseconds start to add up.
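You can get a feel for those startup numbers yourself. This sketch is ours (Node.js 16+ and Deno both provide the global `performance` timer); it times compiling and instantiating a hand-assembled one-function module:

```javascript
// Hand-assembled module exporting add(a, b); small enough to instantiate in a blink.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0/1; i32.add
]);

const t0 = performance.now();
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const t1 = performance.now();

console.log(`cold start: ${(t1 - t0).toFixed(3)} ms`); // typically well under a millisecond
console.log(instance.exports.add(20, 22));             // the instance is immediately usable
```

Real modules are larger and take longer to compile, but the instance-per-request pattern Cornwall describes builds on exactly this cheapness.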

How Wasm Improves Security

There’s also a security risk in containers because people tend to reuse them rather than shut them down and start over. That’s not an issue with Wasm.

“Then you need to worry about how what someone who came before affected what the current person is requesting, or what the current request is doing,” Cornwall said. “With WebAssembly it’s so cheap, you just throw it away. You can just write a serverless function, start up the VM, execute the serverless function and then throw it all away and wait for the next request.”

Not that Wasm is a replacement for containers all the time, he cautioned. Containers are still needed and make sense when running big queries on large databases, where adding another 300 milliseconds to the query really doesn’t make much of a difference.

“Things like that will probably stay in containers because it is a little bit easier to manage a container, at least right now, than it is to manage WebAssembly serverless functions that just kind of float around in space,” he said. “WebAssembly is going to be an addition to when you need to make fast calls to serverless functions, as opposed to taking over for all containers.”

Another way Wasm is more secure than other options is that it will only execute within its sandbox — nothing goes outside of the sandbox. That’s why, so far, the biggest security threat seen with WebAssembly has been websites where bitcoin miners were hidden in the WebAssembly, causing site visitors to unwittingly lend their CPUs for bitcoin mining. It’s not possible for code compiled into Wasm to reach out and send passwords, for instance, because the code stays within the Wasm sandbox, Cornwall explained.

The post Forrester on WebAssembly for Developers: Frontend to Backend appeared first on The New Stack.

Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ https://thenewstack.io/dev-news-dart-3-meets-wasm-flutter-3-10-and-qwik-streamable-javascript/ Sat, 13 May 2023 16:00:58 +0000 https://thenewstack.io/?p=22708063

Google released Dart 3 this week, with the big news being it is now a 100% sound null-safe language and the first preview of Dart to WebAssembly compilation.

“With 100% null safety in Dart, we have a sound type system,” wrote Michael Thomsen, the product manager working on Dart and Flutter. “You can trust that if a type says a value isn’t null, then it never can be null. This avoids certain classes of coding errors, such as null pointer exceptions. It also allows our compilers and runtimes to optimize code in ways it couldn’t without null safety.”

The trade-off, he acknowledged, is that migrations became a bit harder. However, 99% of the top 1000 packages on pub.dev support null safety, so Google expects the “vast majority of packages and apps that have been migrated to null safety” will work with Dart 3. For those who do experience problems using the Dart 3 SDK, there’s a Dart 3 migration guide.

Thomsen also announced a first preview of Dart to WebAssembly compilation. Flutter, which is written in Dart, already uses Wasm, he added.

“We’ve long had an interest in using Wasm to deploy Dart code too, but we’ve been blocked. Dart, like many other object-oriented languages, uses garbage collection,” he wrote. “Over the past year, we’ve collaborated with several teams across the Wasm ecosystem to add a new WasmGC feature to the WebAssembly standard. This is now near-stable in the Chromium and Firefox browsers.”

Compiling Dart to Wasm modules will help achieve high-level goals for web apps, including faster load times; better performance because Wasm modules are low-level and closer to machine code; and semantic consistency.

“For example, Dart web currently differs in how numbers are represented,” he wrote. “With Wasm modules, we’d be able to treat the web like a ‘native’ platform with semantics similar to other native targets.”

Also in Dart 3, Google added records, patterns and class modifiers. The request for multiple return values was Dart’s fourth highest-rated issue, and by adding records, developers can “build up structured data with nice and crisp syntax,” Thomsen noted.

“In Dart, records are a general feature,” he stated. “They can be used for more than function return values. You can also store them in variables, put them into a list, use them as keys in a map, or create records containing other records.”

Records simplify how you build up structured data, he continued, while not replacing using classes for more formal type hierarchies.

Patterns come into play when developers might want to break that structured data into its individual elements to work with them. Patterns shine when used in a switch statement, he explained. While Dart has had limited support for switch, in Dart 3, they’ve broadened the power and expressiveness of the switch statement.

“We now support pattern matching in these cases. We’ve removed the need for adding a break at the end of each case. We also support logical operators to combine cases,” he wrote.

Google also added class modifiers for fine-grained access control for classes.

“Unlike records and patterns that we expect every Dart developer to use, this is more of a power-user feature. It addresses the needs of Dart developers crafting large API surfaces or building enterprise-class apps,” Thomsen stated. “Class modifiers enable API authors to support only a specific set of capabilities. The defaults remain unchanged though. We want Dart to remain simple and approachable.”

Flutter v3.10 Released

Since Flutter is built on Dart, and Dart 3 launched this week, it’s not surprising that Google also launched Flutter version 3.10 at its Google I/O event Wednesday. It was buried in the slew of news announcements, but fortunately, more details were available in a blog post by Kevin Chisholm, Google’s technical program manager for Dart and Flutter.

Flutter 3.10 includes improvements to web, mobile, graphics and security. The framework now complies with Supply Chain Levels for Software Artifacts (SLSA) Level 1, which adds more security features such as:

  • Scripted build process, which now allows for automated builds on trusted build platforms;
  • Multi-party approval with audit logging, in which all executions create auditable log records; and
  • Provenance, with each release publishing links to view and verify provenance on the SDK archive.

This is also the first step toward SLSA L2 and L3 compliance, which focus on protecting artifacts during and after the build process, Chisholm explained.

When it comes to the web, there are a number of new changes, including improved load times for web apps: the release reduces the file size of icon fonts and prunes unused glyphs from Material and Cupertino. Also reduced in size: CanvasKit for all browsers, which should further improve performance.

It also now supports element embedding, which means developers can serve Flutter web apps from a specific element in a page. Previously, apps could either take up the entire page or display within an iframe tag.

The Impeller engine on iOS was tested in the 3.7 stable release, but with v3.10 it’s now set as the default renderer on iOS, which should translate into “less jank and more consistent performance,” Chisholm wrote. In fact, eliminating jank is a big part of this release: Chisholm thanked open source contributor luckysmg, who discovered that it was possible to slash the time to get the next drawable layer from the Metal driver.

“To get that bonus, you need to set the FlutterView’s background color to a non-nil value,” he explained. “This change eliminates low frame rates on recent iOS 120Hz displays. In some cases, it triples the frame rate. This helped us close over half a dozen GitHub issues. This change held such significance that we backported a hotfix into the 3.7 release.”

Among the other lengthy list of improvements are the ability to decode APNG images, improved image loading APIs and support for wireless debugging.

Qwik v1.0: A Full-Stack Framework with ‘Streaming JavaScript’

Qwik, a full-stack web framework, reached version 1.0 this week, with the Qwik team promising a “fundamentally new approach to delivering instant apps at scale.”

The open source JavaScript framework draws inspiration from React, Vue, Angular, Svelte, SolidJS and their meta frameworks — think Next.js, Nuxt, SvelteKit — according to the post announcing the new release. Qwik promises to provide the same strengths as these frameworks while adapting for scalability.

“As web applications get large, their startup performance degrades because current frameworks send too much JavaScript to the client. Keeping the initial bundle size small is a never-ending battle that’s no fun, and we usually lose,” the Qwik team wrote. “Qwik delivers instant applications to the user. This is achieved by keeping the initial JavaScript cost constant, even as your application grows in complexity. Qwik then delivers only the JavaScript for the specific user interaction.”

The result is that the JavaScript doesn’t “overwhelm” the browser even as the app becomes larger. It’s like streaming for JavaScript, they added.

To that end, Qwik solves for instant loading time with JavaScript streaming, speculative code fetching, lazy execution, optimized rendering time and data fetching, to name a few of the benefits listed in the post.

It also incorporates ready-to-use integrations with popular libraries and frameworks, the post noted. Qwik also includes adapters for Azure, Cloudflare, Google Cloud Run, Netlify, Node.js, Deno and Vercel.

The post Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ appeared first on The New Stack.

]]>
Our WebAssembly Experiment: Extending NGINX Agent https://thenewstack.io/our-webassembly-experiment-extending-nginx-agent/ Thu, 11 May 2023 15:21:03 +0000 https://thenewstack.io/?p=22707568

This is the second in a two-part series. Read Part 1 here. At NGINX, we’re excited about what WebAssembly (Wasm)

The post Our WebAssembly Experiment: Extending NGINX Agent appeared first on The New Stack.

]]>

This is the second in a two-part series. Read Part 1 here.

At NGINX, we’re excited about what WebAssembly (Wasm) can offer the community, especially in regard to extensibility. We’ve built a variety of products that benefit from modularity and plugins, including NGINX Open Source and NGINX Plus. This also includes open source NGINX Agent, which is a companion daemon that enables remote management of NGINX configurations, alongside collection and reporting of real-time NGINX performance and operating system metrics.

NGINX Agent is designed with modularity in mind, and it’s written in a popular and Wasm-friendly language: Go. It also uses a publish-subscribe event system to push messages to cooperating plugins. Its current stage of development, however, limits plugin creation to the Go language and static linkage.

Seeing as NGINX Agent is designed with a powerful and flexible architecture, we wondered how we could improve the developer experience by experimenting with an external plugin model (caveat: not as a roadmap item, but to evaluate the ergonomics of using Wasm in a production-grade system).

The choices available to us are wide and varied. We could directly use one of the many runtime engines in development, build some bespoke tools and bindings, or adopt one of the burgeoning plugin software development kits (SDKs) developing in the community. Two such SDKs — Extism and waPC — are compelling, active, excellent examples of the growing ecosystem surrounding Wasm outside the browser.

The Extism and waPC projects take complementary but different approaches to embedding Wasm into an application. They provide server-side SDKs to simplify runtime interfaces, loading and executing Wasm binaries, life-cycle management and server function exports, while also expanding the language set available to the programmer.

Another project, Wasmtime, provides APIs for using Wasm from Rust, C, Python, .NET, Go, BASH and Ruby. Extism has expanded on that set with OCaml, Node, Erlang/Elixir, Haskell and Zig. It also provides an extensive collection of client-side APIs, referred to as plug-in development kits (PDKs). The waPC project takes a similar approach by providing server-side and client-side SDKs to ease the interaction with the underlying runtime engine.

However, some significant differences remain between Extism and waPC. Here is a basic comparison chart:

Extism | waPC
------ | ----
Helper APIs (e.g., memory allocation, function exists) | Fewer client-side APIs (cannot access memory)
Direct runtime invocations | Abstracted runtime invocations, indirect server and client APIs
Single runtime engine | Multiple runtime engines
Host function exports | Host function exports
Complex routing input and output system | Simplified inputs and language-native function output
High number of server languages | Limited server language support (Rust, Go, JavaScript)
High number of client languages | Limited client language support (Rust, Go, AssemblyScript, Zig)
Required C namespace code | C namespace and bindings hidden behind abstraction
Early, pre-GA development releases | Early, pre-GA development releases
Active | Active
Smaller backing group | Used by Dapr, with larger potential backing
Configurable state through supported APIs | Durable state must be passed via custom initialization stage
Basic hash validation | No custom bytecode validation
Host call user data supported | Host call user data unsupported

Depending on your use cases, either Extism or waPC may be a better fit:

  • Extism supports only one runtime engine — Wasmtime; waPC supports multiple runtime engines and is more configurable.
  • Extism allows calls directly to the exported symbols from server and client sides. The waPC project builds an abstraction between the server and client sides by exporting specific call symbols and tracking user-registered functions in a lookup table.
  • Extism defers data serialization entirely to the user. The waPC project integrates with an Interface Definition Language (IDL) to automate some of the serialization or deserialization chores.

We extended NGINX Agent with both projects and used Wasmtime as the exclusive engine to keep things simple. With our candidate SDKs and runtime chosen, it was a generally straightforward process to shunt in an external plugin mechanism.

Our process of extending NGINX Agent followed these stages:

  • Extended the NGINX Agent configuration semantics to define external plugins and their bytecode source.
  • Created an adapter abstraction as a concrete Go structure to shim the Go function calls to their Wasm counterparts.
  • Defined the client API (Guest) as expected client-side function exports.
  • Defined the server API (Host) as expected server-side function exports.
  • Defined data semantics for Host and Guest calls. (Wasm’s type system is strict but limited, and its memory model is a contiguous array of uninterpreted bytes, so passing complex data requires interface definitions and serialization and deserialization utilities.)
  • Finally, we wired everything together by initializing our runtime, registering our expected server API exports, loading example plugins as bytecode, validating expected client APIs, and running the mostly unchanged NGINX Agent core code.
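The adapter and serialization stages above can be sketched as a small Go shim. This is an illustrative reconstruction, not NGINX Agent's actual code: the type and function names are hypothetical, and the Wasm invocation is mocked where a real implementation would call into an SDK such as Extism or waPC backed by Wasmtime.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Plugin stands in for NGINX Agent's existing Go plugin interface
// (illustrative, not the real API).
type Plugin interface {
	Process(msg Message) error
}

// Message is the data passed between host and plugin. Because Wasm memory
// is a contiguous array of uninterpreted bytes, complex data like this must
// be serialized at the boundary.
type Message struct {
	Topic string `json:"topic"`
	Data  string `json:"data"`
}

// wasmCall stands in for the SDK call that executes an exported function
// in the guest bytecode and returns its output.
type wasmCall func(fn string, input []byte) ([]byte, error)

// WasmPlugin is the adapter: a concrete Go structure that shims native
// Plugin calls to their Wasm counterparts.
type WasmPlugin struct {
	call wasmCall
}

func (p *WasmPlugin) Process(msg Message) error {
	raw, err := json.Marshal(msg) // serialize for the flat memory model
	if err != nil {
		return err
	}
	_, err = p.call("process", raw) // invoke the guest's exported "process"
	return err
}

func main() {
	// A mock guest; real code would load bytecode into a runtime engine.
	mock := func(fn string, input []byte) ([]byte, error) {
		fmt.Printf("guest %s got %s\n", fn, input)
		return nil, nil
	}
	var plugin Plugin = &WasmPlugin{call: mock}
	if err := plugin.Process(Message{Topic: "nginx.config", Data: "reload"}); err != nil {
		fmt.Println("plugin error:", err)
	}
}
```

The core code keeps calling the same Go interface; only the adapter knows the call crosses a Wasm boundary.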

The diagram below shows the high-level data flow for the plugin components using Extism. It differs slightly from waPC, in that waPC brings its own abstraction between the Host and Guest systems. That said, the same conclusions can be drawn. Adding an external plugin system to a new or existing application does add some overhead and complexity but, for that cost, our plugins also gain significant benefits in developer choice and portability. Compared to network latency, microservice complexity, distributed race conditions, increased security surface area and the need to protect data on wire and endpoints, the tradeoff is reasonable.

In this simplified view, you can see our shunt between the NGINX Agent core executable and the Wasm “Guest” (or client) code. We used “Go Runtime” as shorthand for the NGINX Agent system and executable. NGINX Agent, having already supported plugins, provided “Plugin Interface.” Then, we built a small shim structure to shunt between Go native calls and the respective SDK calls; for example, a call to Plugin.Process simply generated a call to Extism.Plugin.Call(“process”). The SDK (for both Extism and waPC) does the rest of the work regarding memory, Wasmtime integration and function invocation until the client-side plugin execution. As shown in the diagram, plugins can also call back to “Host” through Wasm exports, in this case allowing for plugins to also publish new messages and events.

Wasm as a Universal Backend Control and Configuration Plane for Plugin Architectures

The Wasm landscape and ecosystem is rapidly advancing. Use outside of the browser is now more than science fiction — it’s a reality with increasingly extensive options for runtime engines, SDKs, utilities, tools and documentation at the developer’s disposal. We see further improvements coming fast on the horizon. The Wasm community is actively working on the component model, along with specifications like WIT and code-generation tools like wit-bindgen defining interoperable Wasm components, server and client APIs. Standardized interfaces could become commonplace, like we experience when writing protobuf files.

Without a doubt, there are more challenges ahead. To name one: higher-order language impedance, such as “What does a server-side Go context mean to a Haskell-sourced client bytecode?” Even so, we found our limited — and experimental — exercise of embedding Wasm into pre-existing projects exciting and illuminating. We plan to do more because Wasm clearly will play a major role in the future of running applications.

In theory, many other applications with plugin architectures could benefit from a similar Wasm stack. We will continue exploring more ways we can use Wasm at NGINX in our open source projects. It’s a brave new Wasm world for the server side, and we are only starting to get a glimpse of what’s possible. As the Wasm toolchain continues to mature and compatibility issues are ironed out, Wasm appears to be a promising path toward enhancing application performance while improving developer experience.

The post Our WebAssembly Experiment: Extending NGINX Agent appeared first on The New Stack.

]]>
A Workaround to WebAssembly’s Endpoint Compatibility Issues? https://thenewstack.io/a-workaround-webassemblys-endpoint-compatibility-issues/ Mon, 08 May 2023 15:00:31 +0000 https://thenewstack.io/?p=22706863

A new WebAssembly player Loophole Labs has joined the WebAssembly module provider fold with its open source platform Scale. Most

The post A Workaround to WebAssembly’s Endpoint Compatibility Issues? appeared first on The New Stack.

]]>

A new WebAssembly player, Loophole Labs, has joined the WebAssembly module provider fold with its open source platform Scale. Most recently, it announced during KubeCon + CloudNativeCon support for deploying WebAssembly functions to the cloud as well as serverless environments.

Scale’s creators also say Scale’s Signature technology offers a workaround for endpoint-compatibility issues ahead of when — if ever — component modules are standardized. Eventually, a common component standard would, in theory, allow code and applications running in a Wasm module to be deployed across various endpoints, including edge devices and servers. This would be done without the hassle of specifying interfaces and painstakingly reading memory across module boundaries for higher-level types, Loophole Labs says. “Better higher-level, non-serializing interfaces would allow for much less tedious configuration work, and more reusability and even less host dependence,” Trezy Who, a principal software engineer at the company, told The New Stack during KubeCon+CloudNativeCon. But until that day, Loophole Labs says, Scale’s Signature offers a way to deploy Wasm modules beyond the browser and backend, past WebAssembly’s current limitations (more about this below), before a standard component model is developed.

Scale Signatures help to ensure compatibility of the endpoints where applications and code are deployed within a Scale module. Signatures are used with Scale Functions to help define the inputs and outputs of a function using declarative syntax, according to Scale’s documentation.

Fast and Easy

The startup is also touting what it says are impressive benchmarks, measured by runtime performance and latency, for applications and code deployed within Scale WebAssembly modules.

Loophole Labs is attempting to capitalize on WebAssembly’s key concepts and strengths: developers should be able to create applications that are loaded into a WebAssembly module and deployed without having to configure their applications or code for the Wasm module, or for deployment across any environment or device able to process a CPU instruction set. What’s running under the hood should not be a concern for a developer working with a Wasm module, security features notwithstanding, since the code inside a Wasm module remains in a closed loop, or so-called sandbox.

“Our goal is to turn Wasm into the default target development environment,” Who said. “In order to do that, we want to abstract away WebAssembly so that all anybody has to think about is: if you’re going to build an application, write the code and don’t worry about the Wasm module.”

Loophole Labs’ creators say it takes about “20 seconds” to write, build and begin running a Scale module with the Scale CLI and the curl command. This means the code running in the Scale Wasm module is compiled and running locally within that 20-second time frame.

A WebAssembly-powered Scale function can process hundreds of thousands of requests per second with ~30ms latencies from different endpoints worldwide, the company says. The benchmarks ran on a 48-core Ryzen CPU with 192GB of RAM, using 16KB payloads for five minutes, and are reproducible with this GitHub repository.

The low latency specs make a good case for relying on the Scale Wasm module versus a container, in addition to the security benefits of deploying applications and code in a closed environment, Shivansh Vij, CEO and founder of Loophole Labs, told The New Stack during KubeCon+CloudNativeCon.

“Often overlooked, many people do not realize that I can ship applications anywhere in the world much faster and cheaper than it would be possible with a container,” Vij said.

While — like all Wasm module providers — Loophole Labs says Scale should eventually be polyglot, incorporating all languages that WebAssembly is designed to support, Scale presently offers support for Go and Rust, with runtimes for Go and TypeScript.

The post A Workaround to WebAssembly’s Endpoint Compatibility Issues? appeared first on The New Stack.

]]>
IBM’s Quiet Approach to AI, Wasm and Serverless https://thenewstack.io/ibms-quiet-approach-to-ai-wasm-and-serverless/ Thu, 04 May 2023 13:00:27 +0000 https://thenewstack.io/?p=22707069

It’s been 12 years since IBM’s Watson took on Jeopardy champions and handily won. Since then, the celebrity of Watson

The post IBM’s Quiet Approach to AI, Wasm and Serverless appeared first on The New Stack.

]]>

It’s been 12 years since IBM’s Watson took on Jeopardy champions and handily won. Since then, the celebrity of Watson has been usurped by ChatGPT, but not because IBM has abandoned Watson or artificial intelligence. In fact, the company’s approach to artificial intelligence has evolved over the years and now reflects a different, more targeted path forward for AI — beyond pumping out generic large language models.

I sat down with IBM Fellow and CTO of IBM Cloud Jason McGee during KubeCon+CloudNativeCon EU, to discuss how Big Blue is approaching modern challenges such as serverless, WebAssembly in the enterprise, and of course artificial intelligence. The conversation has been edited for clarity and brevity.

Using AI for Code Automation

What is IBM doing with automation?

There [are] a lot of dimensions to automation. At the base technology level, we obviously do a lot of work with Ansible and the Red Hat side, and then we use Terraform pretty extensively as a kind of infrastructure-as-code language for provisioning cloud resources and managing a lot of those reference architectures — under the covers are essentially collections of Terraform automation that [are] configured [in] the cloud. There is also higher level work going on in automation, and that’s more like business process automation and robotic process automation, and things like that. With products like Watson Automate, [we] are applying AI and automation to customers’ business processes and automating manual things. So that’s kind of higher up the stack.

We have tools [like robotic process automation and business process management] in our space, and we’re applying AI to that and then down the technology stack. We have software automation tools like Terraform and Ansible that we’re using. We’re doing some interesting work on Ansible with the research team, applying foundation models to help code assist on Ansible and helping people write automation using AI, to help fill in best practice code based on natural language descriptions and stuff.

What does the AI do in that context?

Think about if you’re writing an Ansible playbook, you might have a block that’s, “I want to deploy a web application on Node.js” or something. You could just write a comment, “Create a Node.js server running on port 80” in natural language, and it would read that comment and automatically fill in all of the code and all the Ansible commands, to provision and configure that using best practices. It’s been trained on all the Ansible Galaxy playbooks and GitHub Ansible code. So it’s like helping them write all the Ansible and write good Ansible […] based on natural descriptions of what they’re trying to achieve.
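To make the workflow concrete, here is the shape of an Ansible playbook such an assistant might produce from that natural-language prompt. The task modules are standard Ansible builtins, but the generated block itself is a hypothetical illustration, not output from IBM’s model:

```yaml
# Prompt written by the operator, in natural language:
#   Create a Node.js server running on port 80

# The kind of playbook the assistant might fill in (illustrative only):
- name: Create a Node.js server running on port 80
  hosts: webservers
  become: true
  tasks:
    - name: Install Node.js
      ansible.builtin.package:
        name: nodejs
        state: present

    - name: Deploy the application code
      ansible.builtin.copy:
        src: app/
        dest: /opt/app/

    - name: Start the server on port 80
      ansible.builtin.command: node /opt/app/server.js
      environment:
        PORT: "80"
```

The value of a narrowly trained model, as McGee explains below, is that the filled-in tasks follow Ansible best practices rather than merely plausible-looking YAML.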

The AI is based on large language models. Do they hallucinate? I keep hearing they hallucinate and I’m reminded of the story, “Do Androids Dream of Electric Sheep?”

A great question and it’s part of the example I gave you of that model [which] was trained for a more narrow purpose of doing Ansible code assist, versus something like GPT, which was like trained on everything and therefore it can be more accurate at the smaller scope, right? It understands natural language but also understands Ansible very precisely, and so it can have a higher accuracy than a general purpose large language model, which also could spit out Ansible or TerraForm, or Java or whatever the heck you wanted it to, but maybe has less awareness of how good or accurate language is.

We’re using it in AI Ops as well for incident management, availability management and problem determination. That’s another kind of big space that IBM is investing a lot in — Instana, which is one of our key observability tools.

How do we help customers adopt and leverage large-scale foundation models and large language models? In IBM Cloud we have this thing called the Vela cluster, which is a high-performance foundation model training cluster that’s in our cloud in Washington, DC. It was originally built for our research team so that the IBM Research Group could use it to do all their research and training on models and build things like Project Wisdom on it.

Now we’re starting to expose that for customers. We believe that enterprises will build some of their own large language models or take base models — because we’re also building a bunch of base models — and then customize them by training them on additional unique data. We’re doing work in OpenShift, to allow you to use OpenShift as the platform for that. We’re doing work in open source around that software stack for building models. And then we’re of course building a whole bunch of models.

Beyond Traditional Serverless

TNS: What else are you here promoting today at KubeCon?

McGee: There’s a lot of activity in this space that we’ve been working for a long time on, so it’s more progression. One is serverless and we have a capability called IBM Cloud Code Engine and that’s based on Knative, which is like a layer on top of Kubernetes, designed to help developers consume cloud native. We’ve been doing a lot of work recently expanding that serverless notion to a more varied set of workloads.

Traditional serverless was like apps and functions running event-driven kinds of workloads — a lot of limitations on what kinds of applications you could run there. What we’ve been doing is extending that and opening up the kinds of workloads you can run, so we’re adding in things like batch processing, large-scale parallel computation, compute-intensive, simulation kind of workloads. We’re starting to do some work on HPC [high-performance computing] so people can do financial modeling or EDA [electronic design automation], industrial design and silicon design workloads, leveraging a serverless paradigm. We have a lot of activity going in that space.

We’re also working with a project called Ray, which is a distributed computing framework that’s being used for a lot of AI and data analytics workloads. We’ve enabled Ray to work with the Code Engine so that you can do large-scale bursts [of] compute on cloud and use it to do data analytics processing. We’ve also built a serverless Spark capability, which is another data analytics framework. All of those things are exposed in a single service in Code Engine. So instead of having seven or eight different cloud services that do all these different kinds of workloads, we have a model where we can do all that in one logical service.

What kinds of use cases are you seeing from your customers with serverless?

One of the challenges with serverless is [that] when it started a few years ago, with cloud functions and Lambda, it was positioned in a very narrow kind of way — like it was good for event-driven, it was good for kind of web frontends.

That’s interesting, but customers actually get a lot more value out of these more large-scale, compute-intensive workloads. Especially in cloud, you’d have this massive pool of resources. How do you quickly use that massive pool of resources to run a Monte Carlo simulation or to run a batch job or to run an iteration of design verification for a silicon device you’re building? When you have those large-scale workloads, the traditional way you would do that is you would build a big compute grid, and then you have a lot of costs sunk in all this infrastructure.

We’re starting to see them use serverless as the paradigm for how they run these more compute-intensive, large-scale workloads, because that combines a nice set of attributes, like the resource pool of cloud, with [a] pay-as-you-go pricing model, with a no infrastructure management. You just like simply spin up and spin back down as you run your work. So that’s the angle on serverless we’re seeing a lot more adoption on.

Wasm’s Potential

Are people using serverless on the edge?

They do. It’s more niche, of course. But you see, for example, in CDN (content delivery network), where people want to push small-scale computation out to the edge of the network, close to the end users — so I think there [are] use cases like that. At IBM Cloud, we use Cloudflare as kind of our core internet service, [with] global load balancer and edge CDN, and they support our cloud functions. You see technology like Wasm — just a lot of people here talking about Wasm. Wasm has a role to play in those scenarios.

Is IBM doing anything with Wasm? Is it useful in the enterprise?

We’re enabling some of that, we’re looking at it in the edge. We support Wasm in Code Engine; it gives you a nice, super fast startup time, like workload invocation in 10 milliseconds or something, because I can inject it straight in with Wasm, which is useful if you’re doing large-scale bursty things but you don’t want to pay the penalty of waiting for things to spin up.

But I still think that whole space is more exploratory. It’s not like there [are] massive piles of enterprise workloads waiting to run on Wasm, right? So it’s more next-gen edge device stuff. It’s useful — there [are] some interesting use cases around that HPC [high-performance computing] space potentially … because I can inject small fragments of code into an existing grid, but I also think it’s a little more niche, specialist workloads.

CNCF paid for travel and accommodations for The New Stack to attend the KubeCon+CloudNativeCon Europe 2023 conference.

The post IBM’s Quiet Approach to AI, Wasm and Serverless appeared first on The New Stack.

]]>
Wasm-Based SQL Extensions — Toward Portability and Compatibility https://thenewstack.io/wasm-based-sql-extensions-toward-portability-and-compatibility/ Mon, 01 May 2023 16:23:11 +0000 https://thenewstack.io/?p=22706741

WebAssembly (Wasm) is becoming well known for letting users run code written in different languages in the browser. But that’s

The post Wasm-Based SQL Extensions — Toward Portability and Compatibility appeared first on The New Stack.

]]>

WebAssembly (Wasm) is becoming well known for letting users run code written in different languages in the browser. But that’s not all it lets you do. Wasm’s portability, speed and security make it a great way for you to create platforms and extensible frameworks that let users compile their code to Wasm and run it in your system quickly.

Databases and other data-intensive systems are great candidates for becoming Wasm-powered platforms. When you have a lot of data, it’s cheaper to move the compute to the data than the other way around. Wasm gives us the tools to do this well, but it’s missing a few features that we can either all build on our own in a thousand incompatible ways or build together in the open.

Many SQL databases already have extensibility features that let you create new functions, aggregates, types and more. For example, in databases like PostgreSQL, each extension has an installation script written in SQL and may also include C code that is compiled to a shared library. The C code may use database APIs and implement logic that would be hard to write in procedural SQL languages.

These shared libraries don’t create a secure sandbox, so you can’t easily prevent an extension from using too many resources, corrupting memory or messing with the system. They’re also not very portable, since you have to compile them for each platform on which you run the database.

This is a natural fit for Wasm since its modules are portable, sandboxed and “capability-safe,” which means they can only access what you give them permission to. SingleStore released Wasm-powered extensibility last summer, including user-defined functions (UDFs) created from Wasm. We’re not alone either — several other products and open source projects are also working on Wasm-based extensibility.

Like other Wasm use cases, people working on SQL extensions quickly realized they need some way to pass data like strings, lists and records in and out of Wasm. The core Wasm spec doesn’t provide a way to do this and only defines things like numbers and memory as a flat array of bytes, not higher-level types.

This can lead different Wasm platforms to come up with their own Application Binary Interface (ABI), procedure call mechanism, mapping to gRPC or other solutions. These different solutions to describing high-level interfaces and types lead to a huge amount of fragmentation. This means that Wasm created for one platform can’t be used in another, and users need a different set of tools for each language for each platform, which is both inconvenient and a waste of resources to develop.
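A tiny sketch makes the problem concrete. Core Wasm can only pass integers across the boundary and expose memory as a flat byte array, so every platform must invent a convention, commonly (pointer, length) pairs, for anything richer. The Go below simulates that convention against a plain byte slice; a real host would do the same reads and writes through its runtime engine's memory API:

```go
package main

import "fmt"

// linearMemory simulates a Wasm module's flat, uninterpreted byte array.
type linearMemory struct {
	buf  []byte
	next uint32 // bump-allocator offset for the next write
}

// writeString copies s into memory and returns the (ptr, len) pair the
// guest would receive as two i32 arguments — the only kinds of values
// core Wasm can pass.
func (m *linearMemory) writeString(s string) (ptr, length uint32) {
	ptr = m.next
	copy(m.buf[ptr:], s)
	m.next += uint32(len(s))
	return ptr, uint32(len(s))
}

// readString is what the other side does with those two integers to
// reconstruct the string.
func (m *linearMemory) readString(ptr, length uint32) string {
	return string(m.buf[ptr : ptr+length])
}

func main() {
	mem := &linearMemory{buf: make([]byte, 64)}
	ptr, n := mem.writeString("SELECT 1")
	// Only the two integers cross the call boundary:
	fmt.Println(ptr, n, mem.readString(ptr, n))
}
```

Every platform that invents its own allocator, encoding and ownership rules for this dance produces another incompatible ABI, which is exactly the fragmentation described above.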

However, there is a way out of this fragmentation nightmare: the WebAssembly System Interface (WASI) and the component model. WASI is a subgroup of the WebAssembly Community Group (CG), and it’s working to define standardized interfaces for common system resources and a component model. Wasm Components are wrappers around core Wasm modules, giving us a way to statically link them together and include high-level interfaces and types in the binary.

The component model provides a general solution with a path to standardization for these high-level types and interfaces that are currently being achieved in a huge variety of bespoke ways. If we want to prevent fragmentation, reduce the amount of duplicate work done in the Wasm + SQL ecosystem, and make extensions work in a wide variety of projects and products, the component model and WASI are the answer.

That’s why SingleStoreDB is championing the WASI SQL Embedding proposal, which describes how Wasm can be embedded in SQL environments as extensions. The standard will leverage the component model and its interfaces to provide a way for users to create SQL extensions using only open source component model tools like Cargo Component and Componentize-JS.
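With the component model, those high-level interfaces live in WIT files rather than bespoke ABIs. The fragment below is a hypothetical sketch of what a SQL scalar UDF interface could look like in WIT syntax; it is not the actual WASI SQL Embedding definition, and the package and function names are illustrative:

```wit
// Hypothetical interface for a SQL scalar UDF component.
package example:udf;

world scalar-udf {
  // The database host calls this export with a string argument and gets a
  // string back; the component model handles lifting and lowering these
  // high-level types in and out of the module's linear memory, so neither
  // side hand-writes (pointer, length) plumbing.
  export apply: func(input: string) -> string;
}
```

Tools like wit-bindgen can then generate the guest bindings in any supported language from a file like this, which is what makes a shared standard worth converging on.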

The WASI SQL Embedding proposal is fully open source and part of the WASI subgroup. If you’re interested in being part of a more cohesive and less fragmented SQL-extension ecosystem based on Wasm, come join us.

The post Wasm-Based SQL Extensions — Toward Portability and Compatibility appeared first on The New Stack.

]]>
Will JavaScript Become the Most Popular WebAssembly Language? https://thenewstack.io/webassembly/will-javascript-become-the-most-popular-webassembly-language/ Tue, 25 Apr 2023 13:00:13 +0000 https://thenewstack.io/?p=22706212

Since it grew out of the browser, it’s easy to assume that JavaScript would be a natural fit for WebAssembly.

The post Will JavaScript Become the Most Popular WebAssembly Language? appeared first on The New Stack.

]]>

Since it grew out of the browser, it’s easy to assume that JavaScript would be a natural fit for WebAssembly. But originally, the whole point of WebAssembly was to compile other languages so that developers could interact with them in the browser from JavaScript (compilers that generate Wasm for browsers create both the Wasm module and a JavaScript shim that allows the Wasm module to access browser APIs).

Now there are multiple non-browser runtimes for server-side WebAssembly (plus Docker’s Wasm support), where Wasm modules actually run inside a JavaScript runtime (like V8), so alignment with JavaScript is still important as WebAssembly becomes more of a universal runtime.

Wasm is intentionally polyglot and it always will be; a lot of the recent focus has been on supporting languages like Rust and Go, as well as Python, Ruby and .NET. But JavaScript is also the most popular programming language in the world, and there’s significant ongoing work to improve the options for using JavaScript as a language for writing modules that can be compiled to WebAssembly (in addition to the ways WebAssembly already relies on JavaScript), as well as attempts to apply the lessons learned about improving JavaScript performance to Wasm.

Developer Demand 

When Fermyon released SDKs for building components for its Spin framework using first .NET and then JavaScript and TypeScript, CEO Matt Butcher polled customers to discover what languages they wanted to be prioritized. “[We asked] what languages are you interested in? What languages are you writing in? What languages would you prefer to write in? And basically, JavaScript and TypeScript are two of the top three.” (The third language developers picked was Rust — likely because of the maturity of Rust tooling for Wasm generally — with .NET, Python and Java also proving popular.)

Suborbital saw a similar reaction when it launched JavaScript support for building server-side extensions; JavaScript quickly became its most popular developer language, Butcher told us.

It wasn’t clear whether the 31% of Fermyon customers wanting JavaScript support and the 20% wanting TypeScript support were the same developers or a full half of the respondents, but the language had a definite and surprising lead. “It was surprising to us; that momentum in a community we thought would be the one to push back the most on the idea that JavaScript was necessary inside of WebAssembly is the exact community that is saying no, we really want [JavaScript] support in WebAssembly.”

Butcher had expected more competition between languages for writing WebAssembly, but the responses changed his mind. “They’re not going to compete. It’s just going to be one more place where everybody who knows JavaScript will be able to write and run JavaScript in an emerging technology. People always end up wanting JavaScript.”

“I think at this point, it’s inevitable. It’s going to not just be a WebAssembly language, but likely the number one or number two WebAssembly language very quickly.”

While Butcher pointed at Atwood’s Law (anything that can be written in JavaScript will be), director of the Bytecode Alliance Technical Steering Committee Bailey Hayes brought up Gary Bernhardt’s famous Birth and Death of JavaScript (which predicts a runtime like WebAssembly and likens JavaScript to a cockroach that can survive an apocalypse).

“Rust can be hard to learn. It’s the most loved language, but it also has a pretty steep learning curve. And if somebody’s just getting started, I would love for them to start working with what they know.” Letting developers explore a new area like WebAssembly with the tools they’re familiar with makes them more effective and makes for a better software ecosystem, Hayes suggested. “Obviously we’re all excited about JavaScript because it’s the most popular thing in the world and we want to get as many people on WebAssembly as possible!”

What Developers Want to Do in JavaScript 

Butcher put WebAssembly usage into four main groups: browser applications, cloud applications, IoT applications and plugin applications. JavaScript is relevant to all of them.

“What we have seen [at Fermyon] is [developers] using JavaScript and WebAssembly to write backends for heavily JavaScript-oriented frontends, so they’ll serve out their React app, and then they’ll use the JavaScript back end to implement the data storage or the processing.”

There are obvious advantages for server-side Wasm, Hayes pointed out. “Folks that do server-side JavaScript are going to roll straight into server-side Wasm and get something that’s even smaller and starts even faster: they’re going to see benefits without hardly any friction.”

“People are very excited about running WebAssembly outside the browser, so let’s take the most popular language in the world and make sure it works for this new use case of server-side WebAssembly.”

There were some suggestions for what else JavaScript in WebAssembly would be useful for that struck Butcher as very creative. “One person articulated an interesting in-browser reason why they want JavaScript in WebAssembly, that you can create an even more secure JavaScript sandbox and execute arbitrary untrusted code inside of WebAssembly with an interface to the browser’s version of JavaScript that prevents the untrusted JavaScript from doing things to the trusted JavaScript.”

Being able to isolate snippets of untrusted code in the Wasm sandbox is already a common use case for embedded WebAssembly: SingleStore, Scylla, Postgres, TiDB and CockroachDB have been experimenting with using Wasm for what are effectively stored procedures.

Fastly’s js-compute runtime is JavaScript running on WebAssembly for edge computing, Suborbital is focusing on plugins (where JavaScript makes a lot of sense), Shopify recently added JavaScript as a first-class language for WebAssembly functions to customize the backend, and Redpanda shipped WebAssembly support some time ago (again using JavaScript).

Redpanda’s WebAssembly module exposes a JavaScript API for writing policy on how data is stored on its Kafka-compatible streaming platform, and CEO Alex Gallego told us that’s because of both the flexibility and popularity of JavaScript with developers.

The flexibility is important for platform developers. “When you’re starting to design something new, the most difficult part is committing to a long-term API,” he noted. “Once you commit, people are going to put that code in production, and that’s it: you’re never going to remove that, you’re stuck with your bad decisions. What JavaScript allows you to do, from a framework developer perspective, is iterate on feedback from the community super-fast and change the interface relatively easily because it’s a dynamic language.”

With JavaScript, developers get a familiar programming model for business logic like masking social security numbers, finding users in specific age groups, or credit-scoring IP addresses — all without needing to be an expert in the intricacies of distributed storage and streaming pipelines. “The scalability dimensions of multithreading, vectorization instructions, IO, device handling, network throughput; all of the core gnarly things are still handled by the underlying platform.”
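The kind of single-record transform Gallego describes is just a small, self-contained function. The record shape and field names below are hypothetical, purely for illustration:

```javascript
// A hypothetical per-record policy function of the kind a streaming platform
// could run: mask all but the last four digits of a US social security number.
function maskSSN(record) {
  return { ...record, ssn: record.ssn.replace(/^\d{3}-\d{2}/, "***-**") };
}

console.log(maskSSN({ user: "alice", ssn: "123-45-6789" }).ssn); // "***-**-6789"
```

The platform handles the gnarly parts (threading, IO, throughput); the developer only writes logic like this.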

JavaScript: Popular and Performant

Appealing to developers is a common reason for enabling JavaScript support for writing WebAssembly modules.

When a new service launches, obviously developers won’t have experience with it; but because they know JavaScript, it’ll be much easier for them to get up to speed with what they want to do. That gives platforms a large community of potential customers, Gallego noted.

“It gives WebAssembly the largest possible programming community in the world to draw talent from!”

“WebAssembly allows you to mix and match programming languages, which is great. But in practical terms, I think JavaScript is the way to go. It’s super easy. It’s really friendly, has great packaging, there are a million tutorials for developers. And as you’re looking at expanding talent, right, which is challenging as companies grow, it’s much easier to go and hire JavaScript developers.”

“When it comes to finding the right design for the API that you want to expose, to me, leaning into the largest programming community was a pretty key decision.”

“JavaScript is one of the most widely used languages; it’s always very important because of adoption,” agreed Fastly’s Guy Bedford, who works on several projects in this space. “WebAssembly has all these benefits which apply in all the different environments where it can be deployed, because of its security properties and its performance properties and its portability. All these companies are doing these very interesting things with WebAssembly, but they want to support developers to come from these existing ecosystems.”

JavaScript has some obvious advantages, Butcher noted: “the low barrier to entry, the huge variety of readily available resources to learn it, the unbelievably gigantic number of off-the-shelf libraries that you can pull through npm.”


Libraries are a big part of why using JavaScript with WebAssembly will be important for functionality as well as adoption. “If you’ve developed a library that’s very good at matrix multiplication, you really want to leverage the decade of developer hours that it took you to build that library.” With those advantages, JavaScript could become the SQL equivalent for Wasm, Gallego suggested.

The 20 years of optimization that JavaScript has had are also a big part of the appeal. “There’s so much money being poured into this ecosystem,” he pointed out. “Experts are very financially motivated to make sure that your website renders fast.” The programming team behind the V8 JavaScript engine includes the original creator of Java’s garbage collector. “The people that are focused on the performance of JavaScript are probably the best people in the world to focus on that; that’s a huge leg up over anything else.”

“I think that’s why JavaScript continues to stay relevant: it’s just the number of smart, talented people working on the language not just at the spec level, but also at the execution level.”

“Single thread performance [in JavaScript] is just fantastic,” he noted: that makes a big difference at the edge, turning the combination of WebAssembly and JavaScript into “a real viable vehicle for full-blown application development”.

Similarly, Butcher mused about the server-side rendering of React applications on a WebAssembly cloud to cater to devices that can’t run large amounts of JavaScript in the browser.

“V8 has all of these great performance optimizations,” he agreed. “Even mature languages like Python and Ruby haven’t had the same devoted attention from so many optimizers [as JavaScript] making it just a little bit faster, and just a little more faster.”

“The performance has been pretty compelling and the fact that it’s easy to take a JavaScript runtime and drop it into place… I looked at that and of course, people would want a version that would run in WebAssembly. They can keep reaping the same benefits they’ve had for so long.”

But WebAssembly isn’t quite ready for mainstream JavaScript developers today.

“JavaScript has this low barrier to entry where you don’t have to have a degree or a bunch of experience; it’s a very accessible language. But if you’re a JavaScript developer and you want to be using WebAssembly it’s not easy to know how to do that,” Bedford warned.

Different Ways to Bring JavaScript to Wasm

You can already use JavaScript to write WebAssembly modules, but “there are significant updates coming from the Bytecode Alliance over the next few months that are going to enable more JavaScript,” Cosmonic CEO Liam Randall told us.

“When we think about what the big theme for WebAssembly is going to be in 2023, it really comes down to components, components, components.”

“There have been significant advancements this year in the ability to build, create and operate components and the first two languages down the pipe are Rust and some of this JavaScript work,” Randall continued.

Currently, the most popular approach is to use the very small (210KB) QuickJS interpreter popularized by Shopify, which is included in a number of WebAssembly runtimes. For example, Shopify’s Javy and Fermyon’s spin-js-sdk use QuickJS with the Wasmtime runtime (which has early bindings for TypeScript but doesn’t yet include JavaScript as an officially supported language), and there’s a version of QuickJS for the CNCF’s WasmEdge runtime that supports both JavaScript in WebAssembly and calling C/C++ and Rust functions from JavaScript.

QuickJS supports the majority of ECMAScript 2020 features, including strings, arrays, objects and their methods, async generators, JSON parsing, RegExps and ES modules, plus optional extensions for operator overloading, big decimal (BigDecimal) and big binary floating-point (BigFloat) numbers. So it can run most JavaScript code. As well as being small, it starts up fairly quickly and offers good performance for an interpreter, but it has no JIT compiler.
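For example, everyday code that leans on those ES2020 features runs under QuickJS much as it does in a full browser engine (an illustrative sample, not taken from QuickJS itself):

```javascript
// Optional chaining and nullish coalescing (both ES2020):
const config = { server: { port: 8080 } };
const port = config.server?.port ?? 3000;

// BigInt arithmetic (ES2020):
const big = 2n ** 64n; // 18446744073709551616n

// JSON parsing and async generators, also within QuickJS's supported surface:
const parsed = JSON.parse('{"ok": true}');
async function* pages() {
  yield "page-1";
  yield "page-2";
}

console.log(port, big, parsed.ok); // 8080 18446744073709551616n true
```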

Using QuickJS effectively bundles in a JavaScript runtime, and there’s a tradeoff for this simplicity, Hayes noted: “you typically have a little bit larger size and maybe the performance isn’t as perfect as it could be — but it works in most cases, and I’ve been seeing it get adopted all over.”

Fermyon’s JavaScript SDK builds on the way Javy uses QuickJS but adds the Wizer pre-initializer to speed up QuickJS startup by saving a snapshot of what the code will look like once it’s initialized. “Wizer is what makes .NET so fast on WebAssembly,” Butcher explained. “It starts off the runtime, loads up all the runtime environment for .NET and then writes it back out to disk as a new WebAssembly module. We discovered we can do the same thing with QuickJS.”

“When you run your spin build, the SDK takes the JavaScript runtime, takes your source files, optimizes it with Wizer and then packages all of that up and ships that out as a new WebAssembly binary.”
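The snapshot trick can be sketched in plain JavaScript. This is a toy analogue of the idea, not the real Wizer tool:

```javascript
// "Build time": run the expensive initialization once and snapshot the result.
function expensiveInit() {
  const table = {};
  for (let i = 0; i < 1000; i++) table[i] = i * i; // stand-in for runtime setup
  return table;
}
const snapshot = JSON.stringify(expensiveInit()); // shipped alongside the module

// "Run time": restore the pre-built state instead of recomputing it.
const restored = JSON.parse(snapshot);
console.log(restored[30]); // 900, with no re-initialization cost
```

Wizer does the same thing one level down: it runs the Wasm module's initialization, then writes the initialized memory out as a new module.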

If the idea of getting a speed boost by pre-optimizing the code for an interpreted language sounds familiar, that’s because it’s the way most of the browser JavaScript engines work. “They start interpreting the JavaScript but while they’re interpreting, they feed in the JavaScript files to an optimizer so that a few milliseconds into execution, you flip over from interpreted mode into the compiled optimized mode.”

“One of the biggest untold stories is how much, at the end of the day, WebAssembly really is just everything we’ve learned from JavaScript, Java, .NET — all the pioneering languages in the 90s,” Butcher suggested. “What did we learn in 15-20 years of doing those languages and how do we make that the new baseline that we start with and then start building afresh on top of that?”

Adding JIT

Shopify also contracted Igalia to bring SpiderMonkey, the Mozilla JavaScript engine, to Wasm, while Fastly (which has a number of ex-Mozilla staff) has taken an alternative approach with componentize-js, using SpiderMonkey to run JavaScript for WebAssembly in the high-speed mode it runs in the browser, JIT compiling at least part of your JavaScript code and running it inside the WebAssembly interpreter.

Although WebAssembly modules are portable enough to use in many different places, it’s not yet easy to compose multiple Wasm modules into a program (as opposed to writing an entire, monolithic program in one source language and then compiling that into a single module). Type support in Wasm is primitive, the different WebAssembly capabilities various modules may require are grouped into different “worlds” (like web, cloud and the CLI) and modules typically define their own local address space.

“The problem with WebAssembly has been that you get this binary, but you’ve got all these very low-level binding functions and there’s a whole lot of wiring process. You have to do that wiring specifically for every language and it’s a very complex marshaling of data in and out, so you have to really be a very experienced developer to be able to know how to handle this,” Bedford told us.

The WebAssembly component model adds dependency descriptions and high-level, language-independent interfaces for passing values and pointers. These interfaces solve what he calls “the high-level encapsulation problem with shared nothing completely separated memory spaces.”

“You don’t just have a box, you have a box with interfaces, and they can talk to each other,” he explained. “You’re able to have functions and different types of structs and object structures and you can have all of these types of data structures passing across the component boundary.”

That enables developers to create the kind of reusable modules that are common in JavaScript, Python, Rust and other languages.

Componentize-js builds on this and allows developers to work with arbitrary bindings. “You bring your bindings and your JavaScript module that you want to run and we give you a WebAssembly binary that represents the entire JavaScript runtime and engine with those bindings. We can do that very quickly and we can generate very complex bindings.”

This doesn’t need a lot of extra build steps for WebAssembly: JavaScript developers can use familiar tooling, and install the library from npm.

Although the SpiderMonkey engine is larger than QuickJS — Bedford estimates a binary with the JavaScript runtime and a developer’s JavaScript module will be 5-6MB — that’s still small enough to initialize quickly, even on the kind of hardware that will be available at the edge (where Fastly’s platform runs).

Again, this uses Wizer to optimize initialization performance, because that affects the cold start time. “We pre-initialize all of the JavaScript up until right before the point where it’s going to call your function, so there’s no JavaScript engine initialization happening. Everything is already pre-initialized using Wizer.”

“You’re just calling the code that you need to call so there’s not a whole lot of overhead.”

That isn’t AOT (Ahead Of Time) compilation, but later this year and next year, componentize-js will have more advanced runtime optimizations using partial evaluation techniques that Bedford suggested will effectively deliver AOT. “Because you know which functions are bound you can partially evaluate the interpreter using Futamura projections and get the compiled version of those functions as a natural process of partially evaluating the interpreter in SpiderMonkey itself.”
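The partial-evaluation idea behind Futamura projections can be illustrated with a toy interpreter: fixing the program argument of an interpreter yields, in effect, a compiled function.

```javascript
// A toy interpreter for a list of [op, operand] instructions.
const interpret = (program, x) =>
  program.reduce((acc, [op, n]) => (op === "add" ? acc + n : acc * n), x);

// First Futamura projection, in miniature: specialize the interpreter to one
// fixed program, leaving only the input free.
const specialize = (program) => (x) => interpret(program, x);

const compiled = specialize([["add", 2], ["mul", 3]]);
console.log(compiled(4)); // (4 + 2) * 3 = 18
```

Componentize-js aims to do this for real, partially evaluating SpiderMonkey itself over the known, bound functions.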

Componentize-js is part of a larger effort from the Bytecode Alliance called jco — JavaScript components tooling for WebAssembly — an experimental JavaScript component toolchain that isn’t specific to the JavaScript runtime Fastly uses for its own edge offering. “The idea was to build a more generic tool, so wherever you’re putting WebAssembly and you want to allow people to write a small bit of JavaScript, you can do it,” Bedford explained.

Jco is a project “where you can see the new JavaScript experience from stem to stern”, Randall noted, suggesting that you can expect to see more mature versions of the JavaScript and Rust component work for the next release of wasmtime, which will be aligned with WASI Preview2. It’s important to note that this is all still experimental — there hasn’t been a full release of the WebAssembly component model yet and Bedford refers to componentize-js as research rather than pre-release software: “this is a first step to bring this stuff to developers who want to be on the bleeding edge exploring this”.

The experimental SlightJS also targets the WebAssembly component model, creating the Wasm Interface Types (WIT) bindings that let packages share types and definitions for JavaScript. So far the wit-bindgen generator (which creates language bindings for programs developers want to compile to WebAssembly and use with the component model) only supports compiled languages — C/C++, Rust, Java and TinyGo — so adding an interpreted language like JavaScript may be challenging.

While spin-js-sdk produces bindings specifically for Spin HTTP triggers, SlightJS aims to create bindings for any WIT interface a developer wants to use. Eventually, it will be part of Microsoft’s SpiderLightning project, which provides WIT interfaces for features developers need when building cloud native applications, adding JavaScript support to the slight CLI for running Wasm applications that use SpiderLightning.

Currently, SlightJS uses QuickJS because the performance is better, but it could switch as the SpiderMonkey improvements arrive; Butcher pointed out the possible performance advantages of a JIT-style JavaScript runtime. QuickJS itself has largely replaced an earlier embeddable JavaScript engine, Duktape.

“There’s a real explosion of activity,” Bedford told us: “there’s very much a sense of accelerating development momentum in this space at the moment.”

Improving JavaScript and Wasm Together

You can think of these options as “JavaScript script on top and WebAssembly on the bottom,” suggested Daniel Ehrenberg, vice president of the TC39 ECMAScript working group, but another approach is “JavaScript and WebAssembly side by side with the JavaScript VM beneath it”.

The latter is where Bloomberg and Igalia have been focusing, with proposals aimed at enabling efficient interaction between JavaScript and WebAssembly, like reference-typed strings to make it easier for WebAssembly programs to create and consume JavaScript strings, and WebAssembly GC for garbage collection to simplify memory management.

Making strings work better between the two languages is about efficiency, TC39 co-chair and head of Bloomberg’s JavaScript Infrastructure and Tooling team Rob Palmer explained.

“This unlocks a lot of use cases for smaller scale use of WebAssembly [for] speeding up some small amount of computation.”

“At the moment they cannot really be efficient, because the overhead of copying strings between the two domains outweighs the benefit of higher speed processing within WebAssembly.”
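That copying is visible at today's JavaScript/Wasm boundary. The sketch below hand-encodes a minimal Wasm module that only exports a one-page linear memory, then moves a string in and back out, one copy each way:

```javascript
// Minimal Wasm binary: header, a one-page memory, and an export named "memory".
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x05, 0x03, 0x01, 0x00, 0x01,                               // memory section: 1 page
  0x07, 0x0a, 0x01, 0x06, 0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79, // export "memory"
  0x02, 0x00,
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const memory = new Uint8Array(exports.memory.buffer);

const encoded = new TextEncoder().encode("hello wasm"); // copy 1: string -> bytes
memory.set(encoded, 0);                                 // write into linear memory
const out = new TextDecoder()                           // copy 2: bytes -> string
  .decode(memory.subarray(0, encoded.length));
console.log(out); // "hello wasm"
```

Reference-typed strings would let both sides share one string value instead of paying for the two copies.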

GC goes beyond the weak references and finalization registry additions to JavaScript (in ECMAScript 2021), which provide what Ehrenberg calls a bare minimum of interoperability between WebAssembly’s linear memory and JavaScript heap-based memory, allowing some Wasm programs to be compiled. The GC proposal is more comprehensive. “WebAssembly doesn’t just have linear memory; WebAssembly can also allocate several different garbage-collection-allocated objects that all point to each other and have completely automatic memory management,” Ehrenberg explains. “You just have the reference tracing and when something’s dead, it goes away.”

Work on supporting threads in WASI to improve performance through parallelization and give access to existing libraries is at an even earlier stage (it’s initially only for C and it isn’t clear how it will work with the component model) but these two WebAssembly proposals are fairly well developed and he expects to see them in browsers soon, where they will help a range of developers.

“Partly that’s been enabling people to compile languages like Kotlin to WebAssembly and have that be more efficient than it would be if it were just directly with its own memory allocation, but it also enables zero-copy memory sharing between JavaScript and WebAssembly in this side-by-side architecture.”

For server-side JavaScript, Ehrenberg is encouraged by early signs of better alignment between two approaches that initially seemed to be pulling in different directions: WinterCG APIs (designed to enable web capabilities in server-side environments) and WASI, which aims to offer stronger IO capabilities in WebAssembly.

“You want WinterCG APIs to work in Deno but you also want them to work in Shopify’s JavaScript environment and Fastly’s JavaScript environment that are implemented on top of WebAssembly using WASI,” he pointed out. “Now that people are implementing JavaScript on top of WebAssembly, they’re looking at can JavaScript support the WinterCG APIs and then can those WinterCG APIs be implemented in WASI?”

The Promise of Multilanguage Wasm 

The flexibility of JavaScript makes it a good way to explore the componentization and composability that gives the WebAssembly component model so much promise, embryonic as it is today.

Along with Rust, JavaScript will be the first language to take advantage of a modular WebAssembly experience that Randall predicted will come to all languages, allowing developers to essentially mix and match components from multiple WebAssembly worlds in different languages and put them together to create new applications.

“You could use high performance and secure Rust to build cloud components, much like wasmCloud does, and you could pair that with less complicated to write user-facing code in JavaScript. I could take JavaScript components from different worlds and marry them together and I could take cargo components written in Rust, and I can now recompose those in many different ways.”

“You can have Rust talking to JavaScript and you can be running it in the sandbox or you could have a JavaScript component that’s alerting a highly optimized Rust component to do some heavy lifting, but you’re writing the high-level component that’s your edge service in JavaScript,” agreed Bedford.

The way componentize-js lets you take JavaScript and bundle it as a WebAssembly component will translate to working in multiple languages with the Jco toolchain and equivalent tools like cargo-component that also rely on the component model.

Despite WebAssembly’s support for multiple languages, using them together today is hard.

“You have to hope that someone’s going to go and take that Rust application and write some JavaScript — write the JavaScript bindgen for it and then maintain that bindgen,” Bedford explained. “Whereas with the component model, they don’t even need to think about targeting JavaScript in particular; they can target the component model, making this available to any number of languages and then you as a JavaScript developer just go for it.”

“That’s what the component model brings to these workflows. Someone can write their component in Rust and you can very easily bring it into a JavaScript environment. And then [for environments] outside the browser you can now bring JavaScript developers along.”

That will also open up JavaScript components for Rust developers, he noted. “Jco is a JavaScript component toolchain that supports both creating JavaScript components and running components in JavaScript.”

In the future, the wasm-compose library “that lets you take two components and basically smoosh them together” could help with this, Hayes suggested. As the component model becomes available over the next few years, it will make WebAssembly a very interesting place to explore.

“If you support JavaScript and Rust, you’ve just combined two massive language ecosystems that people love, and now they can interop and let people just pick the best library or tool.”

“I’m so excited about WebAssembly components because, in theory, it should break down the silos that we’ve created between frontend and backend engineers and language ecosystems.”

The post Will JavaScript Become the Most Popular WebAssembly Language? appeared first on The New Stack.

]]>
WebAssembly for the Server Side: A New Way to NGINX https://thenewstack.io/webassembly-for-the-server-side-a-new-way-to-nginx/ Fri, 21 Apr 2023 18:11:42 +0000 https://thenewstack.io/?p=22705788

This is the first of a two-part series. The meteoric rise of WebAssembly (Wasm) started because it’s a language-agnostic runtime

The post WebAssembly for the Server Side: A New Way to NGINX appeared first on The New Stack.

]]>

This is the first of a two-part series.

The meteoric rise of WebAssembly (Wasm) started because it’s a language-agnostic runtime environment for the browser that enables safe and fast execution of languages other than JavaScript. Although Wasm’s initial focus was in the browser, developers have begun to explore the possibilities of Wasm on the backend, where it opens many possibilities for server and network management.

Similar to NGINX, many server-side technologies operate with a standard plugin model, which relies on statically or dynamically injecting linked object files into an executable running in the same address space.

However, plugins have considerable limitations. Many allow extensibility only through native-language extensions, which limits developer choice in terms of languages and language-specific capabilities. Others must conform to complex linking methods that require both server and client languages to support the same function interface, adding complexity for plugin authors.

Finally, some plugins work through dynamic languages and scripting layers. These are easier to use but sacrifice performance, and dynamic scripting can introduce layers of abstraction as well as additional security risk. For example, remote procedure calls (RPCs) must address network communication, serialization and deserialization, error handling, asynchronous behavior, multiplatform compatibility and latency. A plugin model that uses RPCs is flexible, but at the cost of greatly increased complexity.

Why Wasm Rocks: Fast, Secure, Flexible

So, what is this Wasm thing? Wasm is a binary format and runtime environment for executing code. In short, Wasm was created as a low-level, efficient and secure way to run code at near-native speeds. Wasm code is designed to be compiled from high-level programming languages such as C, C++, Golang and Rust. In reality, Wasm is language-agnostic and portable. This is becoming more important as developers who deploy and maintain applications increasingly prefer to write as much as possible in a single language (in other words, less YAML).
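A minimal, hand-encoded module shows what that binary format looks like from the host side. The bytes below are one plausible encoding of what a compiler would emit for a function adding two 32-bit integers:

```javascript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```

The whole module is 41 bytes; compilers targeting Wasm emit exactly this kind of compact, CPU-friendly encoding at scale.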

Wasm blows the standard plugin model wide open by allowing for far more flexible and manageable plugins. With Wasm, making plugins language-neutral, hardware-neutral, modular and isolated is much easier than with existing plugin models. This enables developers to customize behaviors beyond the browser, specific to their environment and use cases, in the language of their choice.

Wasm achieves all this while maintaining near-native code levels of performance thanks to:

  • A compact binary format smaller than equivalent human-readable code, resulting in faster download and parse times.
  • An instruction set that is closer to native machine instructions, allowing for faster interpretation and compilation to native code.
  • Strong, static typing that gives just-in-time (JIT) compilers better optimization opportunities for faster code generation and execution.
  • A contiguous, resizable linear memory model that simplifies memory management, allowing for more efficient memory access patterns.
  • Concurrency and parallel execution that unlock performance from multicore processors (currently a work in progress).

Designed initially for running untrusted code on the web, Wasm has a particularly strong security model that includes:

  • A sandboxed code execution environment that limits access to system resources and ensures that it cannot interfere with other processes or the operating system.
  • A “memory-safe” architecture that helps prevent common security vulnerabilities such as buffer overflows.
  • A robust typing system that enforces strict typing rules.
  • Small code size compared to other runtimes, which reduces the attack surface.
  • A bytecode format that is designed to be easy to analyze and optimize, which makes it easier to detect and fix potential security vulnerabilities.
  • Minimal need to refactor code for different platforms because of its high degree of portability.
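To make the sandbox and memory-safety points above concrete, here is a minimal sketch — in Python, with invented names, not a real Wasm runtime — of how a host bounds-checks every access to a module's linear memory and traps on violations instead of letting them read or corrupt neighboring memory:

```python
class Trap(Exception):
    """Raised when a module violates its sandbox, mirroring a Wasm trap."""

class LinearMemory:
    PAGE_SIZE = 65536  # Wasm linear memory grows in 64 KiB pages

    def __init__(self, initial_pages: int):
        self.data = bytearray(initial_pages * self.PAGE_SIZE)

    def load(self, addr: int, size: int) -> bytes:
        # Every access is bounds-checked: a bad pointer traps rather
        # than over-reading into other memory (no buffer over-reads).
        if addr < 0 or addr + size > len(self.data):
            raise Trap(f"out-of-bounds read at {addr}")
        return bytes(self.data[addr:addr + size])

    def store(self, addr: int, value: bytes) -> None:
        if addr < 0 or addr + len(value) > len(self.data):
            raise Trap(f"out-of-bounds write at {addr}")
        self.data[addr:addr + len(value)] = value

mem = LinearMemory(initial_pages=1)
mem.store(0, b"hello")
assert mem.load(0, 5) == b"hello"
try:
    mem.load(LinearMemory.PAGE_SIZE, 1)  # one byte past the end
except Trap:
    print("trapped")
```

Real engines implement these checks in compiled code, often using hardware memory protection so that the common in-bounds case costs nothing.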

A More Flexible Way to Build Plugins

Server-side Wasm has a number of impressive potential benefits. To start, Wasm environments can make it much easier for application developers to interact with backend systems. Wasm also lets anyone set up granular guardrails for what a function can and cannot do when it interacts with the lower-level functionality of a networking or server-side application. That matters because backend systems may handle sensitive data or require higher levels of trust.

Similarly, server systems can be configured or designed to limit interaction with the Wasm plugin environment by exporting only limited functionality or providing only specific file descriptors for communication. For example, every Wasm binary has an imports section, and each import must be satisfied before instantiation. This lets a host system register (export, in Wasm parlance) only the specific functions a module may call.

Runtime engines will prevent instantiation of the Wasm module when those imports are not satisfied, giving host systems the ability to guardrail, control, validate and restrict what interaction the client has with the environment.
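The import-satisfaction rule can be sketched as a toy host — invented names, not a real runtime like Wasmtime — that refuses to instantiate a module whose declared imports it has not explicitly exported:

```python
class InstantiationError(Exception):
    pass

class HostEnvironment:
    """A host that exposes only the functions it chooses to export."""

    def __init__(self):
        self.exports = {}

    def export(self, name, fn):
        self.exports[name] = fn

    def instantiate(self, module_imports):
        # Refuse to instantiate unless every import the module declares
        # is satisfied by a host export -- this is the guardrail.
        missing = [n for n in module_imports if n not in self.exports]
        if missing:
            raise InstantiationError(f"unsatisfied imports: {missing}")
        return {n: self.exports[n] for n in module_imports}

host = HostEnvironment()
host.export("log", print)  # the host chooses to expose logging...
# ...but deliberately does not expose, say, filesystem access.

linked = host.instantiate(["log"])          # succeeds
try:
    host.instantiate(["log", "open_file"])  # fails: import unsatisfied
except InstantiationError as e:
    print(e)
```

The module never gets an ambient capability it wasn't handed; anything the host doesn't export simply doesn't exist from the module's point of view.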

With more traditional plugin models and compiler technologies, achieving this level of granularity and utility is a challenge, and the difficulty discourages developers from writing plugins, further limiting choice. Perhaps most importantly, role-based access control, attribute-based access control and other authorization technologies can introduce complex external systems that must be kept in sync with both the plugin and the underlying server-side technology. In contrast, Wasm access controls are often built directly into the runtime engine, reducing complexity and simplifying development.

Looking Ahead to the Great Wasm Future

In a future sprinkled with Wasm pixie dust, developers will be able to more easily design bespoke or semi-custom configurations and business logic for their applications. Additionally, they’ll be able to apply that to the server side to remove much of the development friction between backend, middle and frontend.

A Wasm-based plugin future could mean many cool things: easier and finer tuning of application performance, specific scaling and policy triggers based on application-level metrics and more.

With warg.io, we’re already seeing how Wasm might fuel innovative, composable approaches that apply the familiar package-management-and-registry model to building with trusted Wasm components. In other words, Wasm might give us composable plugins not unlike the way a developer assembles several npm modules to achieve a specific functionality profile.

Application developers and DevOps teams generally have had blunt instruments to improve application performance. When latency issues or other problems arise, they have a few choices:

  1. Throw more compute at the problem.
  2. Increase memory (and, indirectly, I/O).
  3. Go into the code and try to identify the sources of latency.

The first two can be expensive. The last is incredibly laborious. With Wasm, developers can elect to run the parts of an app that are dragging down performance inside a Wasm module, implemented in a faster language. They can do this without ripping out the whole application, focusing instead on low-hanging fruit (for example, replacing slow JavaScript calculation code with C or Go code compiled to Wasm).

In fact, Wasm has a host of performance advantages over JavaScript. To paraphrase Lin Clark from Mozilla on the original Wasm team:

  • It’s faster to fetch Wasm, as it is more compact than JavaScript, even when compressed.
  • Decoding Wasm is faster than parsing JavaScript.
  • Because Wasm is closer to machine code than JavaScript, and already has gone through optimization on the server side, compiling and optimizing takes less time.
  • Code execution runs faster because fewer compiler tricks and gotchas are necessary for developers to write consistently performant code, and Wasm’s instruction set is better suited to machines.

So let’s imagine this future: Microservices aren’t choreographing through expensive Kubernetes API server calls or internal east-west RPCs, but instead through modular, safe and highly performant Wasm components bounded within a smaller process space and surface area.

Traditionally, developers have used data encoding languages like YAML to define custom resource definitions (CRDs) and other mechanisms for adding functionality to their applications running as microservices in Kubernetes. This adds overhead and complexity, making performance tuning more challenging. With Wasm-based plugins, developers can take advantage of well-known, trusted language primitives (Go, Rust, C++) rather than reinventing the wheel with more CRDs.

The post WebAssembly for the Server Side: A New Way to NGINX appeared first on The New Stack.

Fermyon Cloud: Save Your WebAssembly Serverless Data Locally https://thenewstack.io/fermyon-cloud-save-your-webassembly-serverless-data-locally/ Thu, 20 Apr 2023 20:22:25 +0000 https://thenewstack.io/?p=22705714


Fermyon Technologies has added local stateful storage capacity for Fermyon Cloud as well as Spin 1.1, as the WebAssembly startup seeks to improve the developer experience for Wasm.

With the introduction of the Fermyon Cloud Key Value Store, users can now persist non-relational data in a key/value store managed by Fermyon Cloud that remains available to their serverless applications. Data access is measured in milliseconds, with no cold starts, the company says, given the low latency that WebAssembly offers for data connections. The Fermyon Cloud Key Value Store is an implementation of Spin’s key/value API, which means you can deploy Spin apps that use key/value data to Fermyon Cloud without changing anything about your application, the company says.
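As a rough illustration of the interface such apps code against, here is a dict-backed sketch. The method names approximate Spin’s key/value operations (set, get, delete, exists), but this class is a stand-in for illustration, not the real SDK:

```python
class KeyValueStore:
    """Toy in-memory stand-in for a managed key/value store."""

    def __init__(self):
        self._data = {}

    def set(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str):
        return self._data.get(key)  # None if the key is absent

    def delete(self, key: str) -> None:
        self._data.pop(key, None)

    def exists(self, key: str) -> bool:
        return key in self._data

# An app written against this interface doesn't care whether the
# backing store is local (Spin's built-in store) or a managed cloud
# one -- which is the portability claim being made here.
store = KeyValueStore()
store.set("visits", b"1")
assert store.get("visits") == b"1"
```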

“When designing Fermyon Cloud, we wanted to retain certain stateful mechanisms because there are things that are going to start up, run to completion and stop. So while stateless is really a prerequisite to be able to scale for that, we wanted to give the developer the feeling that they didn’t have to start wiring up their own extra storage service,” Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack during the first day of KubeCon + CloudNativeCon. “Now that Key Value Store is released inside of Fermyon Cloud, the developer is really just making what appears to be regular API calls to store data. It’s deploying into a highly scalable, replicated environment.”

In a blog post, Fermyon described the pain points Fermyon Cloud users have experienced, including:

  • Having to manage external stateful data services to use from Spin apps introduces additional infrastructure and operational overhead.
  • Changes in configuration and code between environments often introduce friction between local development and deploying to production.

Previously, users running serverless workloads had to rely solely on external services to persist state beyond the lifespan of a single request, although Spin lets you use databases you manage yourself (such as Redis, PostgreSQL or MySQL).

“So it really feels a lot of this was based on the idea that we want to remove developer friction all along the pipeline, by trying to figure out what are frustrating points for the developer,” Butcher said. “For example, without Fermyon Cloud Key, the developer might have to stand up a local copy of Redis and install it and keep it running. Instead, this step is thus removed by using Fermyon Cloud Key Value Store to allow this to happen.”

Fermyon Technologies offers key-value storage for serverless functions, with 1,000 free database records of up to 1MB each. Spin, the company’s popular open source tool for building WebAssembly serverless apps, added local key-value storage in version 1.0, and now developers can instantly use that capability in a serverless runtime on Fermyon Cloud, which is also free.

Under the Hood

As noted, the Fermyon Cloud Key Value Store is an implementation of Spin’s key/value API, so Spin apps that use key/value data deploy to Fermyon Cloud without any changes, Fermyon said. The final command once setup is completed is very simple:
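The snippet the article refers to did not survive extraction; assuming the standard Spin workflow the surrounding text describes, the command would presumably be Spin’s usual deploy step:

```shell
# Presumed command (original snippet lost in extraction):
# push the unchanged Spin app to Fermyon Cloud.
spin deploy
```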

The latest feature also supports WebAssembly adoption in general, for which Butcher said momentum continues to build. “There has been a growing general awareness of what WebAssembly is, what it can do and what its strengths are,” Butcher said. “We were talking at the beginning of 2023 about how it’s likely that WebAssembly becomes mainstream this year. We’re definitely seeing evidence of that happening already.”

Check back often this week for all things KubeCon+CloudNativeCon Europe 2023. The New Stack will be your eyes and ears on the ground in Amsterdam!

The post Fermyon Cloud: Save Your WebAssembly Serverless Data Locally appeared first on The New Stack.
