Sebastian Petrus: "Someone Has Made an Uncensored Version of QwQ-32B-Preview, And It Is Awesome" (Dec 3, 2024). QwQ-32B-Preview is awesome, and more powerful than GPT-4o-mini, but it is censored. Not any more.
Fabio Matricardi in Artificial Corner: "10 things to know before starting to work with Open source LLM — part 1" (Aug 25, 2023). Learn the basics to start working with Open Source Large Language Models, and leverage the power of your own private AI.
Thomas Reid: "OLLAMA & Hugging Face: 1000s of Models, One Powerful AI Platform" (Oct 28, 2024). Harness the Power of Diverse Models for Smarter Solutions.
Robert Corwin in TDS Archive: "Running Large Language Models Privately" (Oct 30, 2024). A comparison of frameworks, models, and costs.
Gary A. Stafford: "Local Inference with Meta’s Latest Llama 3.2 LLMs Using Ollama, LangChain, and Streamlit" (Sep 27, 2024). Meta’s latest Llama 3.2 1B and 3B models are available from Ollama. Learn how to install and interact with these models locally using…
PinkHatHacker in The Thought Collection: "The Gemma 2 Advantage: How This AI Is Outpacing ChatGPT-4o and the Rest" (Jun 28, 2024).
Mehul Gupta in Data Science in your pocket: "Llama 3.2 Vision, the new multi-modal LLM by Meta" (Sep 26, 2024). How to use Llama 3.2, with its new features explained.
Fabio Matricardi in AI Advances: "Qwen2.5 1.5b: the future of Mobile AI?" (Oct 2, 2024). Local testing and evaluation of Alibaba Cloud’s latest LLM, with llama-cpp-python and a DIY prompt catalog.
Amos Gyamfi: "The 6 Best LLM Tools To Run Models Locally" (Aug 28, 2024, updated Feb 2, 2025). You can experiment with LLMs locally using GUI-based tools like LM Studio or the command line with Ollama.
Raghunaathan in Towards AI: "LLM Finetuning Strategies" (Sep 24, 2024). Unlocking Precision: Tailor Your LLM to Perfectly Fit Your Needs!
Gavin Li in AI Advances: "Breakthrough: Running the New King of Open-Source LLMs QWen2.5 on an Ancient 4GB GPU" (Sep 21, 2024). New King of Open-Source LLM: QWen 2.5 72B.
Guillaume Weingertner in TDS Archive: "How to Easily Set Up a Neat User Interface for Your Local LLM" (Aug 28, 2024). A step-by-step guide to run Llama3 locally with Open WebUI.
Dr. Walid Soula in ILLUMINATION: "List of Different Ways to Run LLMs Locally" (Mar 26, 2024). In this article we will see different ways to run any LLM locally; pin this article so you can test everything or come back when needed.
Fabio Matricardi: "Battle of the Prompts: Unveiling the True Capabilities of Open Source Language Models" (Sep 15, 2023). Putting LLMs to the Test: Analyzing Performance of Orca-3b, Llama2-7b and Platypus-13b Across Varied Prompts.
Guillaume Weingertner in TDS Archive: "Running Local LLMs is More Useful and Easier Than You Think" (Jul 11, 2024). A step-by-step guide to run Llama3 locally with Python.
Fabio Matricardi in Generative AI: "H2O revolution: not fresh water but Mobile friendly LLM" (Jul 17, 2024). Meet the Danube3-0.5b chat model: tiny, fast and powerful.
Fabio Matricardi in Generative AI: "One click LLM are now reality. Road to portable — AI part 2" (Jun 3, 2024). Llamafile finally delivers the long-awaited magic: a portable executable LLM that you run locally, for free, with one click.
Fabio Matricardi in Generative AI: "LLM ABCs: 'Building LLM-Powered Applications' — A Gateway to AI’s Future" (Jun 6, 2024). The era of Generative AI 🚀, where even errors are opportunities to innovate.
Florian June in AI Advances: "Demystifying PDF Parsing 03: OCR-Free Small Model-Based Method" (Jun 1, 2024). Overview, Principles and Insights.