Fast, distributed, secure AI for Big Data

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com.

  • BigDL

    Accelerates local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex, and Max). A PyTorch LLM library that integrates seamlessly with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc. (see the sketch after this list).

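For context, here is a minimal sketch of what running a model through BigDL-LLM can look like, assuming the transformers-style API described in the project's README (load_in_4bit for INT4 quantization); the model id, prompt, and generation settings below are placeholders, not taken from the original post:

    # Hypothetical example: load a Hugging Face model through BigDL-LLM with
    # INT4 quantization so it can run in laptop-class memory on Intel hardware.
    import torch
    from bigdl.llm.transformers import AutoModelForCausalLM  # BigDL-LLM drop-in class
    from transformers import AutoTokenizer

    model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model id

    # load_in_4bit=True asks BigDL-LLM to quantize the weights to INT4 on load.
    model = AutoModelForCausalLM.from_pretrained(
        model_path, load_in_4bit=True, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    prompt = "What is BigDL?"  # placeholder prompt
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.inference_mode():
        output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

To target an Intel GPU instead of the CPU, the project's GPU examples additionally move the quantized model to the "xpu" device (e.g., model.to("xpu")) before generation.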
NOTE: The number of mentions on this list reflects mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • PyTorch Library for Running LLM on Intel CPU and GPU

    1 project | news.ycombinator.com | 3 Apr 2024
  • BigDL-LLM: running LLM on your laptop using INT4

    1 project | news.ycombinator.com | 3 Jul 2023
  • Help Needed: Converting PlantNet-300k Pretrained Model Weights from Tar to h5 Format

    1 project | /r/learnpython | 9 Jun 2023
  • Can You Achieve GPU Performance When Running CNNs on a CPU?

    1 project | /r/computervision | 8 Apr 2023
  • [D] DeepSparse: 1,000X CPU Performance Boost & 92% Power Reduction with Sparsified Models in MLPerf™ Inference v3.0

    1 project | /r/deeplearning | 7 Apr 2023