
OpenAI Chats about Scaling LLMs at Anyscale’s Ray Summit

At Ray Summit, Anyscale CEO Robert Nishihara talked AI scaling with John Schulman, a co-founder of OpenAI and a creator of ChatGPT.
Sep 19th, 2023 8:58am

This week at Anyscale’s Ray Summit, a conference focused on LLMs and generative AI for developers, attention was turned to the business of scaling.

Robert Nishihara, the co-founder and CEO of Anyscale, opened the Ray Summit by warning that the LLM era was about to get even more complex and data-intensive than it already is. “Soon we’ll all be using multimodal models, working not just with text data, but also video and image data,” he said. “It’s going to become far more data intensive. On the hardware front, the variety of accelerators that we need to support will grow. On the application front, applications are becoming far more complex.”

Funnily enough, Anyscale has just the product to deal with this new layer of complexity. The company already maintains Ray, the open source distributed machine learning framework used by OpenAI, Uber and others. But now it’s launching Anyscale Endpoints, which lets developers integrate, fine-tune and deploy open source LLMs at scale.

“This is an LLM API, an LLM inference API — like the OpenAI API, but for open models like Llama 2,” said Nishihara about Endpoints.

The cost for this will be $1 per million tokens. “That is the price point for the 70 billion parameter Llama model and that is the lowest price point on the market,” he claimed.
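For developers, the practical pitch is that Endpoints can be called much the way the OpenAI API is. Here’s a minimal sketch of what that might look like; the base URL, model name and environment variable are illustrative assumptions, not details from the announcement:

import os
import requests

# A hedged sketch of calling an OpenAI-style chat completions API.
BASE_URL = "https://api.endpoints.anyscale.com/v1"  # assumed endpoint
API_KEY = os.environ["ANYSCALE_API_KEY"]            # assumed env var name

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/Llama-2-70b-chat-hf",  # the 70B Llama 2 model
        "messages": [{"role": "user", "content": "Summarize Ray in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])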

Anyscale Endpoints

Endpoints includes the ability to fine-tune an LLM. However, for further customization, customers will need to upgrade to the full Anyscale AI Application Platform, which the company says gives them “the ability to fully customize an LLM, and have fine-grained control over their data and models and end-to-end app architecture as well as deploy multiple AI applications on the same infrastructure.”

Still, being able to fine-tune an LLM via API is very useful for any application that doesn’t require massive scale.
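If the fine-tuning flow follows the OpenAI-style pattern of uploading a training file and then launching a job, the developer experience might look roughly like this (the endpoint paths, parameters and model name are assumptions for illustration, not confirmed details of the product):

import os
import requests

BASE_URL = "https://api.endpoints.anyscale.com/v1"  # assumed endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['ANYSCALE_API_KEY']}"}

# 1. Upload a JSONL file of chat-formatted training examples (assumed route).
with open("train.jsonl", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/files",
        headers=HEADERS,
        files={"file": f},
        data={"purpose": "fine-tune"},
        timeout=60,
    ).json()

# 2. Kick off a fine-tuning job against an open base model (assumed route).
job = requests.post(
    f"{BASE_URL}/fine_tuning/jobs",
    headers=HEADERS,
    json={"model": "meta-llama/Llama-2-7b-chat-hf", "training_file": upload["id"]},
    timeout=60,
).json()
print(job["id"])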

Also announced was Anyscale Private Endpoints, which enables customers to run the service inside their own cloud.

A Sit-Down with OpenAI Co-Founder John Schulman

As well as the product announcements, Nishihara sat down with John Schulman, one of the founders of OpenAI and a creator of ChatGPT. After some initial chitchat, Nishihara brought up the issue of scale for OpenAI. “Where did that belief in the importance of scaling models and compute come from?” he asked.

“The founding team of OpenAI […] leans more towards this aesthetic of scale up simple things, rather than trying to build some complicated clever thing,” Schulman replied. He then made the point that scaling in machine learning is more complicated than many people realize.

“There [are] usually all these little details, like you have to scale your learning rates just right — otherwise, you get worse results with big models — and you have to scale your data up along with the model size. So I’d say that it took several years to figure out what were the right recipes for scaling things.”
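To make that concrete, here is a toy illustration of the kind of recipe Schulman is alluding to. The inverse-width learning-rate rule and the tokens-per-parameter ratio below are common heuristics from the scaling-law literature, used here as assumptions rather than anything OpenAI has published:

# Toy "scaling recipe": shrink the learning rate as the model gets wider,
# and grow the training data along with the parameter count.
def scaled_lr(base_lr: float, base_width: int, width: int) -> float:
    """A common heuristic: learning rate inversely proportional to width."""
    return base_lr * base_width / width

def scaled_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Scale data roughly linearly with model size (assumed ratio)."""
    return params * tokens_per_param

print(scaled_lr(3e-4, 768, 12288))   # a wider model gets a smaller lr
print(f"{scaled_tokens(70e9):.1e}")  # ~1.4e12 tokens for a 70B model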

John Schulman, one of the co-founders of OpenAI and a creator of ChatGPT.

To tease out more about OpenAI’s approach to scaling, Nishihara asked, “What’s stopping [you] from using, you know, 70 trillion parameter models today, or even bigger?”

“It’s about compute efficiency,” Schulman replied. “So now we know you can train a small model for really long, or a big model for short, and there’s some trade-off — and it turns out that somewhere in the middle, you get the best compute efficiency.”

Schulman noted that this is likely to change, but for now, a 70 trillion parameter model isn’t optimal.
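The trade-off can be sketched with the standard back-of-the-envelope rule from the scaling-law literature that training cost is roughly 6 × parameters × tokens (an approximation brought in here for illustration, not a figure from the talk):

# Back-of-the-envelope: training FLOPs ~ 6 * parameters * tokens.
def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

budget = 1e24  # a fixed compute budget in FLOPs (illustrative)
for params in (7e9, 70e9, 70e12):
    tokens = budget / (6.0 * params)
    print(f"{params:.0e} params -> {tokens:.2e} tokens on this budget")

# At 70 trillion parameters the model would see far fewer tokens than it
# has parameters, leaving it badly undertrained -- which is why the best
# compute efficiency sits "somewhere in the middle."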

Nishihara later noted that OpenAI is “pushing the limits […] of scale, in a lot of different dimensions” and asked Schulman about its infrastructure. Obviously, it was a leading question, since OpenAI uses Anyscale’s Ray system to do distributed computing. Even so, it was interesting to hear further details about how OpenAI operates.

“We have a library for doing distributed training and it does model parallelism,” Schulman explained. “So you’re sending around weights and gradients and activations, and […] we use Ray as a big part of that for doing all the communication.”
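OpenAI’s training library is internal, but the communication pattern Schulman describes, shuttling weights, gradients and activations between workers, maps naturally onto Ray’s actor model. Here is a toy sketch of pipeline-style model parallelism with Ray actors; it is illustrative only, not OpenAI’s code:

import ray
import numpy as np

ray.init()

@ray.remote
class Stage:
    """One shard of the model: here, just a random linear layer + ReLU."""
    def __init__(self, in_dim: int, out_dim: int):
        self.w = np.random.randn(in_dim, out_dim) * 0.01

    def forward(self, x: np.ndarray) -> np.ndarray:
        return np.maximum(x @ self.w, 0.0)

# Split a "model" across two actors (two GPUs, in a real setup).
stage1 = Stage.remote(512, 1024)
stage2 = Stage.remote(1024, 256)

x = np.random.randn(8, 512)
h = stage1.forward.remote(x)             # activations live in Ray's object store
out = ray.get(stage2.forward.remote(h))  # Ray resolves the reference for stage2
print(out.shape)  # (8, 256)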

To end the discussion, Nishihara made an interesting observation about the state of AI a decade ago. “Looking back a decade ago, problems like unsupervised learning were not that well understood,” he said. “Or perhaps we didn’t know how to conceptualize the problem.” He then asked Schulman what problems today “we’re still figuring out how to formulate.”

Schulman first mentioned “data accuracy,” a nod to the hallucination problem that everyone talks about with LLMs. But then he offered a more nuanced view.

“So there’s this problem of how […] do you supervise a model that’s kind of superhuman,” he said, adding that “sometimes this is called scalable oversight or scalable supervision.”

Ultimately, he continued, the supervision issue boils down to how to make sure LLMs are doing what humans want. However, in this case, “some of the problems haven’t even been formulated precisely yet.”
