OpenAI and Generative AI Developments


  • Doubtful of the claim that they don’t need data; they still need more data for training
  • Multinationals may start using OpenAI models
  • OpenAI is looking for data partnerships, but currently only able to cater to proposals from big tech companies
  • No list of data partners available
  • OpenAI’s platform API data policy is public information
  • OpenAI is a small company and has largely exhausted internet and academic datasets for training its models
  • Replit’s codegen model beats OpenAI Codex on many HumanEval tasks while being smaller
  • OpenAI’s models are a utility like EC2 and have greatly improved the UX of consuming models
  • Many NLP models now work with Hugging Face’s transformers library
  • Stanford NLP is highly regarded
  • Qualcomm has made progress in ML on edge
  • Palantir has launched what has been called “ChatGPT for war”
  • Anduril may come up with something similar
  • OpenAI’s tech use cases range from disaster relief to drone warfare

Music and Audio Generation:

  • Discussion on music generation, audio generation, jingle generation, song generation, and new AI instruments
  • Shared interest in the topic

Generative AI:

  • Accel’s slide deck on opportunities in generative AI and on fine-tuning vs. prompting
  • Replit Demo Day videos soon to be on YouTube
  • Dedicated group for music, images, and video on WhatsApp
  • Expert talk on music generation requested
  • An unnamed tool (name lost in extraction) is internal for now
  • Paperspace recommended for GPU access
  • Google Colab offers 100 compute units for $12, including T4 (16 GB) GPUs, though what a compute unit actually buys is unclear
  • Runpod, Lambda Labs, and FluidStack recommended for serverless GPU access
  • Weaviate discussed for storing metadata alongside vectors in a vector DB
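The Weaviate point above (and the related question later in these notes about whether metadata lives in Weaviate or a separate database) comes down to one pattern: keeping each record’s metadata next to its vector, so a single store can combine a metadata filter with similarity search. A minimal in-memory sketch of that pattern — all names and data here are illustrative, not Weaviate’s actual API:

```python
import math

# Toy in-memory vector store: each record keeps its vector and its
# metadata together, so filtering and ranking happen in one place.
records = [
    {"vector": [1.0, 0.0], "meta": {"source": "chat", "year": 2023}},
    {"vector": [0.9, 0.1], "meta": {"source": "blog", "year": 2022}},
    {"vector": [0.0, 1.0], "meta": {"source": "chat", "year": 2023}},
]

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, where, k=1):
    # Filter on metadata first, then rank survivors by similarity.
    pool = [r for r in records
            if all(r["meta"].get(f) == v for f, v in where.items())]
    return sorted(pool, key=lambda r: cosine(query, r["vector"]),
                  reverse=True)[:k]

# Nearest chat-sourced record to the query vector:
hits = search([1.0, 0.05], {"source": "chat"}, k=1)
print(hits[0]["meta"])
```

Weaviate supports this shape natively via object properties on each stored vector; the alternative — metadata in a second database keyed by object ID — forces a join after every vector query.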



Descriptions and links below may be mismatched because of extraction errors.

  • - A tweet discussing a project that was apparently built in under two weeks, and mentioning the differences between GPT-3.5-turbo, GPT-4, and a vanilla LLM.
  • The URL is a Reddit post in the r/StableDiffusion subreddit discussing a performance breakthrough by Google researchers in machine learning. The message in the same link asks whether anyone is working on ML on edge devices.
  • The message expresses excitement about Qualcomm’s demonstration of stable diffusion on Android, which is showcased in the given URL:
  • - PSA: Dedicated group for music, images, video. The message also discusses testing Chinchilla limits and training for more tokens than most people have tried for similarly sized models. An expert talk by the person mentioned would be helpful.
  • - A message about the availability of an API for the gpt-4 multimodal model.
  • or can be used instead of Photoroom as it doesn’t have image understanding.
  • The URL is mentioned and it is described as “awesome” but it is currently internal. There is no mention of any image understanding.
  • The LinkedIn post discusses using AI models to generate a background for a photoshoot and includes a link to a pizza commercial created using these tools. The post also mentions someone playing with music generation. (URL:
  • - Suggested as an option for GPU usage when experimenting with Stable Diffusion and running Gradio/AUTOMATIC1111, and for storing models like Lyriel/Deliberate plus ControlNets, with a note that the 5 GB of space on Gradient may not be sufficient.
  • The URL is mentioned in the context of connecting a GCE VM to Colab for persistent sessions and dedicated compute. The message also recommends using Runpod and stopping the instance when not in use to only be charged for storage.
  • - This link provides context for the statement that “For non persistent / spot instances of GPUs GOOG was always in under supply while we were testing” in the same message.
  • A question on Weaviate usage, asking whether metadata is stored in Weaviate or a different database.
  • and are mentioned in the context of discussing containerization and embedding content in an app.
  • Link to a YouTube video, context unknown.
  • Link to a GitHub repository for Kandinsky-2, context unknown.
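On the “testing Chinchilla limits” point in the link summaries above: the Chinchilla scaling result (Hoffmann et al., 2022) suggests roughly 20 training tokens per parameter is compute-optimal, so “training for more tokens than most people have tried for similarly sized models” means going well past that ratio to get a stronger model at a fixed size. A back-of-envelope sketch, with illustrative model sizes:

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly
# 20 tokens per parameter. Training past this ratio trades extra
# compute for a better model at the same parameter count.
TOKENS_PER_PARAM = 20

def chinchilla_tokens(n_params):
    """Approximate compute-optimal token budget for a model size."""
    return TOKENS_PER_PARAM * n_params

for n_params in (1e9, 3e9, 7e9):  # illustrative sizes, in parameters
    budget = chinchilla_tokens(n_params)
    print(f"{n_params / 1e9:.0f}B params -> ~{budget / 1e9:.0f}B tokens")
    # e.g. training a 3B model on 300B tokens is ~5x past the
    # Chinchilla-optimal 60B-token budget
```

The payoff of over-training is an inference-time one: a small model trained far past its Chinchilla budget can match a larger, Chinchilla-optimal model while being much cheaper to serve.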