Mistral ships Le Chat – enterprise AI assistant that can run on prem

2025-05-07 14:24 | 508 points | 158 comments | mistral.ai

Your Enterprise. Your AI.

Today, we’re proud to introduce Le Chat Enterprise, a feature-rich AI assistant powered by our brand new Mistral Medium 3 model. It tackles common enterprise AI challenges (tool fragmentation, insecure knowledge integration, rigid models, and slow ROI) by delivering a unified AI platform for all organizational work.

Building on the foundation of Le Chat’s productivity tools, the new plan includes:

  • Enterprise search
  • Agent builders
  • Custom data and tool connectors
  • Document libraries
  • Custom models
  • Hybrid deployments

* Rolling out over the next two weeks.

We’re also announcing several big improvements to Le Chat Pro and Team — our plans for individuals and growing teams.

Le Chat Enterprise aims to provide the AI productivity your team needs in one platform that is fully private and deeply customizable. Plus, our world-class AI engineering team offers support all the way through to value delivery.

Empower your team to be even more productive, more competitive, more everything.

Unified AI for all organizational work.

Transform complex tasks into achievable outcomes with AI that speaks every professional language.

Whether your team is analyzing data, writing code, or creating content, they can access cross-domain expertise through intuitive interfaces designed for both technical and non-technical users.

Enterprise search with secure data, tool connections and libraries.

Unlock intelligence from your enterprise data, starting with Google Drive, SharePoint, OneDrive, Google Calendar, and Gmail. More connectors are coming soon, including templates to build your own.

  • Get improved, personalized answers by connecting Le Chat to your knowledge.

  • Organize external data sources, documents, and web content into complete knowledge bases for the most relevant answers.

  • Preview files quickly with Auto Summary for faster consumption.

Le Chat enables your team to maintain a handy personal library of frequently used documents, spanning uploaded files as well as Drive/SharePoint. Cite, extract, and analyze critical information.

We’re also adding MCP support soon, so your organization can easily connect Le Chat to even more enterprise systems.

Build and deploy custom AI agents for precise, automated task handling.

Automate routine tasks with AI agents, connected to your apps and libraries for contextual understanding across tools. Le Chat will enable your team to easily build custom assistants that match your own requirements — no code required.

Privacy-first.

Deploy Le Chat anywhere: self-hosted, in your public or private cloud, or as a service hosted in the Mistral cloud. Privacy-first data connections to enterprise tools, with strict ACL adherence, ensure full data protection and safety.

Build your AI strategy with true flexibility — Mistral AI gives you the independence to choose your ideal infrastructure, without lock-in.

Complete control and configurability. 

We offer deep customizability and full control across the stack, from models and the platform, all the way to the interfaces.

You can customize your AI experience through bespoke integrations to your team’s enterprise data and custom platform and model capabilities, like personalizing your assistant with stored memories. Or take it further by enabling user feedback loops for continuous model self-improvement.

You'll have full control of your implementation within your security domain while providing employees access to SOTA intelligence.

Additionally, we provide comprehensive audit logging and storage.

Advanced solutioning and value delivery.

Leverage Mistral’s applied AI expertise to tailor models to fit your exact use case. We provide hands-on assistance from the world’s best AI engineers and scientists across deployment, solutioning, safety, and beyond.

Get started today.

Experience frontier artificial intelligence with the Le Chat Pro, Team, and Enterprise plans, suited to your organization’s needs.

Le Chat Enterprise is now available on Google Cloud Marketplace, and will soon be available on Azure AI and AWS Bedrock.

Contact us to learn more about how Le Chat Enterprise can transform your organization.

To get started with Le Chat today, try it at chat.mistral.ai, or download our mobile app from the App Store or Play Store — no credit card needed.



Comments

  • By codingbot3000 2025-05-07 17:41

    I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.

    Btw, you can also run Mistral locally within the Docker model runner on a Mac.

    • By simonw 2025-05-07 21:02

      There are plenty of other ways to run Mistral models on a Mac. I'm a big fan of Mistral Small 3.1.

      I've run that using both Ollama (easiest) and MLX. Here are the Ollama models: https://ollama.com/library/mistral-small3.1/tags - the 15GB one works fine.

      For MLX https://huggingface.co/mlx-community/Mistral-Small-3.1-24B-I... and https://huggingface.co/mlx-community/Mistral-Small-3.1-24B-I... should work, I use the 8bit one like this:

        llm install llm-mlx
        llm mlx download-model mlx-community/Mistral-Small-3.1-Text-24B-Instruct-2503-8bit -a mistral-small-3.1
        llm chat -m mistral-small-3.1
      
      The Ollama one supports image inputs too:

        llm install llm-ollama
        ollama pull mistral-small3.1
        llm -m mistral-small3.1 'describe this image' \
          -a https://static.simonwillison.net/static/2025/Mpaboundrycdfw-1.png
      
      Output here: https://gist.github.com/simonw/89005e8aa2daef82c53c2c2c62207...

      • By indigodaddy 2025-05-07 21:38

        Simon, can you recommend some small models that would be usable for coding on a standard M4 Mac Mini (only 16GB RAM)?

        • By simonw 2025-05-07 21:42

          That's pretty tough - the problem is that you need to have RAM left over to run actual applications!

          Qwen 3 8B on MLX runs in just 5GB of RAM and can write basic code but I don't know if it would be good enough for anything interesting: https://simonwillison.net/2025/May/2/qwen3-8b/

          Honestly though with that little memory I'd stick to running against hosted LLMs - Claude 3.7 Sonnet, Gemini 2.5 Pro, o4-mini are all cheap enough that it's hard to spend much money with them for most coding workflows.

          • By codetrotter 2025-05-08 01:40

            How about on a MacBook Pro M2 Max with 64GB RAM? Any recommendations for local models for coding on that?

            I tried to run some of the differently sized DeepSeek R1 models locally when those had recently come out, but couldn’t manage at the time to run any of them. And I had to download a lot of data to try them. So if you know a specific size of DeepSeek R1 that will work with 64GB RAM on a MacBook Pro M2 Max, or another great local LLM for coding on that machine, that would be super appreciated.

            • By freeqaz 2025-05-08 02:15

              I imagine that this in quantized form would fit pretty well and be decent. (Qwen R1 32b[1] or Qwen 3 32b[2])

              Specifically the `Q6_K` quant looks solid at ~27gb. That leaves enough headroom on your 64gb Macbook that you can actually load a decent amount of context. (It takes extra VRAM for every token of context you need)

              Rough math, based on this[0] calculator is that it's around ~10gb per 32k tokens of context. And that doesn't seem to change based on using a different quant size -- you just have to have enough headroom.

              So with 64gb:

              - ~25gb for Q6 quant

              - 10-20gb for context of 32-64k

              That leaves you around 20gb for application memory and _probably_ enough context to actually be useful for larger coding tasks! (It just might be slow, but you can use a smaller quant to get more speed.)

              I hope that helps!

              0: https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calcul...

              1: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32...

              2: https://huggingface.co/Qwen/Qwen3-32B-GGUF
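The rough math in this comment can be sketched as a tiny budget calculator (a sketch only; the ~10 GB per 32k tokens figure is the back-of-envelope estimate above, not a measurement, and real usage varies by model):

```python
def vram_budget(total_gb, model_gb, ctx_tokens, gb_per_32k=10):
    """Rough memory budget for a local quantized model.

    Assumes the back-of-envelope figure of ~10 GB of KV cache
    per 32k tokens of context, independent of quant size.
    """
    ctx_gb = gb_per_32k * ctx_tokens / 32_000
    leftover_gb = total_gb - model_gb - ctx_gb
    return ctx_gb, leftover_gb

# 64 GB machine, ~27 GB Q6_K quant, 32k tokens of context:
ctx_gb, free_gb = vram_budget(64, 27, 32_000)
# -> 10 GB for context, ~27 GB left for the OS and applications
```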

            • By simonw 2025-05-08 03:45

              I really like Mistral Small 3.1 (I have a 64GB M2 as well). Qwen 3 is worth trying in different sizes too.

              I don't know if they'll be good enough for general coding tasks though - I've been spoiled by API access to Claude 3.7 Sonnet and o4-mini and Gemini 2.5 Pro.

              • By aukejw 2025-05-08 10:04

                How do you determine peak memory usage? Just look at activity monitor?

                I've yet to find a good overview of how much memory each model needs for different context lengths (other than back of the envelope #weights * bits). LM Studio warns you if a model will likely not fit, but it's not very exact.
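For what it’s worth, the back-of-the-envelope version of that estimate can be written down directly. A sketch, assuming a plain transformer with an fp16 KV cache; the layer/head numbers in the example are illustrative, not those of any specific model:

```python
def weights_gb(n_params_billion, bits):
    # Weight memory: #params * bits / 8 bytes, e.g. 24B at 8-bit ~= 24 GB
    return n_params_billion * bits / 8

def kv_cache_gb(n_layers, n_kv_heads, head_dim, ctx_tokens, bytes_per_value=2):
    # Keys and values (factor 2) for every layer, KV head, and token
    return 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_value / 1e9

print(weights_gb(24, 8))                # 24.0 GB of weights
print(kv_cache_gb(40, 8, 128, 32_000))  # ~5.2 GB of KV cache at 32k context
```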

            • By aukejw 2025-05-08 06:41

              There are plenty of smaller (quantized) models that fit well on your machine! On a M4 with 24GB it’s already possible to comfortably run 8B quantized models.

              I'm benchmarking runtime and memory usage for a few of them: https://aukejw.github.io/mlx_transformers_benchmark/

        • By jychang 2025-05-08 01:41

          16GB on a mac with unified memory is too small for good coding models. Anything on that machine is severely compromised. Maybe in ~1 year we will see better models that fit in ~8gb vram, but not yet.

          Right now, for a coding LLM on a Mac, the standard is Qwen 3 32b, which runs great on any M1 mac with 32gb memory or better. Qwen 3 235b is better, but fewer people have 128gb memory.

          Anything smaller than 32b, you start seeing a big drop off in quality. Qwen 3 14b Q4_K_M is probably your best option at 16gb memory, but it's significantly worse in quality than 32b.

          • By chedabob 2025-05-08 09:24

            What do you use to interface with Qwen?

            I have LMStudio installed, and use Continue in VSCode, but it doesn't feel nearly as feature rich compared to using something like Cursor's IDE, or the GitHub Copilot plugin.

            • By Lalabadie 2025-05-08 12:22

              Continue can be your autocomplete provider – and use a smaller and faster model. Something like Cline (or Roo or Kilocode or another fork) would be the more Cursor-like assistant there.

        • By reichardt 2025-05-07 21:44

          With around 4.6 GiB model size the new Qwen3-8B quantized to 4-bit should fit comfortably in 16 GiB of memory: https://huggingface.co/mlx-community/Qwen3-8B-4bit

        • By martin_a 2025-05-08 09:10

          Strange idea, but if I'd like to set up a solid LLM for use in my home network, how much processing power would I need for a multi-purpose model?

          A Raspberry Pi? An old ThinkPad? A fully specced-out latest-gen MacBook?

          edit: One of those old Mac Pros?

          • By wsintra2022 2025-05-08 13:04

            That’s what I tried initially, an old black tin can Mac Pro, but it couldn’t do it. Next I splashed out on an M2 Ultra 64GB Mac Pro; it runs Ollama with Qwen3 32B. Reverse shell into the localhost with Open WebUI and Automatic1111, and voila, AI on my home network.

            • By martin_a 2025-05-08 14:14

              Hm, that seems like a lot of power use. I thought I could get away with somewhat less.

        • By the_other_mac 2025-05-08 11:45

          Run Mistral 7b in under 4gb ram:

          https://github.com/garagesteve1155/Overload

          (As announced this morning in the FB group "Dull Men's Club"!)

    • By kergonath 2025-05-07 18:08

      > I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.

      Indeed. At work, we are experimenting with this. Using a cloud platform is a non-starter for data confidentiality reasons. On-premise is the way to go. Also, they’re not American, which helps.

      > Btw, you can also run Mistral locally within the Docker model runner on a Mac.

      True, but you can do that only with their open-weight models, right? They are very useful and work well, but their commercial models are bigger and hopefully better (I use some of their free models every day, but none of their commercial ones).

      • By distances 2025-05-07 19:05

        I also kind of don't understand how it seems everyone is using AI for coding. I haven't had a client yet who would have approved any external AI usage. So I basically use them as search engines on steroids, but code can't go directly in or out.

        • By fhd2 2025-05-07 19:11

          You might be able to get your clients to sign something to allow usage, but if you don't, as you say, it doesn't seem wise to vibe code for them. For two reasons:

          1. A typical contract transfers the rights to the work. The ownership of AI generated code is legally a wee bit disputed. If you modify and refactor generated code heavily it's probably fine, but if you just accept AI generated code en masse, making your client think that you wrote it and it is therefore their copyright, that seems dangerous.

          2. A typical contract or NDA also contains non disclosure, i.e. you can't share confidential information, e.g. code (including code you _just_ wrote, due to #1) with external parties or the general public willy nilly. Whether any terms of service assurances from OpenAI or Anthropic that your model inputs and outputs will probably not be used for training are legally sufficient, I have doubts.

          IANAL, and _perhaps_ I'm wrong about one or both of these, in one or more countries, but by and large I'd say the risk is not worth the benefit.

          I mostly use third party LLMs like I would StackOverflow: Don't post company code there verbatim, make an isolated example. And also don't paste from SO verbatim. I tried other ways of using LLMs for programming a few times in personal projects and can't say I worry about lower productivity with these limitations. YMMV.

          (All this also generally goes for employees with typical employment contracts: It's probably a contract violation.)

          • By jstummbillig 2025-05-07 21:03

            Nobody is seriously disputing the ownership of AI generated code. A serious dispute would involve a considerable, concerted effort to stop AI code generation in some jurisdiction, in contrast to the enormous, ongoing efforts by multiple large players with eye-watering investments to make code generation bigger and better.

            Note that this is not a statement about the fairness or morality of LLM building. But to think that the legality of AI code generation is something to reasonably worry about is betting against multiple large players and their hundreds of billions of dollars in investment right now, and that likely puts you in a bad spot in reality.

            • By reverius42 2025-05-07 22:57

              > Nobody is seriously disputing the ownership of AI generated code

              From what I've been following it seems very likely that, at least in the US, AI-generated anything can't actually be copyrighted and thus can't have ownership at all! The legal implications of this are yet to percolate through the system though.

              • By staunton 2025-05-07 23:51

                Only if that interpretation lasts despite likely intense lobbying to the contrary.

            • By cess11 2025-05-08 08:43

              Other forms of LLM output are being seriously challenged, however.

              https://llmlitigation.com/case-updates.html

              Personally I have roughly zero trust in US courts on this type of issue, but we'll see how it goes. Arguably there are cases to be made where LLMs cough up code cribbed from repos with certain licenses without crediting authors and so on. It's probably a matter of time until some aggressively litigious actors make serious, systematic attempts at getting money out of this, producing case law as a by-product.

              Edit: Oh right, Butterick et al went after Copilot and image generation too.

              https://githubcopilotlitigation.com/case-updates.html

              https://imagegeneratorlitigation.com/case-updates.html

            • By mistrial9 2025-05-08 00:41

              this is "Kool-aid" from the supply side of LLMs for coding IMO. Plenty of people are plenty upset about the capture of code at Github corral, fed into BigCorp$ training systems.

              parent statement reminds me of smug French in a castle north of London circa 1200, with furious locals standing outside the gates, dressed in rags with farm tools as weapons. One well-equipped tower guard says to another "no one is seriously disputing the administration of these lands"

              • By _joel 2025-05-08 10:32

                Your mother was a hamster and your father smelt of elderberries?

              • By jstummbillig 2025-05-08 07:00

                I think the comparison falls flat, but it's actually really funny. I'll keep it in mind.

          • By distances 2025-05-07 19:46

            Yes these are indeed the points. I don't really care too much, it would make me a bit more efficient but I'm billing by the hour anyway so I'm completely fine playing by the book.

            • By fhd2 2025-05-07 19:57

              Not sure I can agree with the "I'm billing by the hour" part.

              I mean sure, but I think of my little agency providing value, for a price. Clients have budgets, they have limited benefits from any software they build, and in order to be competitive against other agencies or their internal teams, overall, I feel we need to provide a good bang for buck.

              But since it's not all that much about typing in code, and since even that activity isn't all that sped up by LLMs, not if quality and stability matters, I would still agree that it's completely fine.

              • By distances 2025-05-07 20:48

                Yes, it's important of course that I'm efficient, and I am. But my coding speed isn't the main differentiating factor why clients like me.

                I meant that I don't care enough to spearhead and drive this effort within the client orgs. They have their own processes, and internal employees would surely also like to use AI, so maybe they'll get there eventually. And meanwhile I'll just use it in the approved ways.

          • By _bin_ 2025-05-07 21:17

            This comes down to a question of what one can prove. NNs are necessarily not explainable, and none of this would leave much evidence to show in court.

            • By fhd2 2025-05-08 05:14

              Sure there's evidence: Your statements about this when challenged. And perhaps to a degree the commit log, at least that can arouse suspicion.

              Sure, you can say "I'd just lie about it". But I don't know how many people would just casually lie in court. I sure wouldn't. Ethics is one thing, it takes a lot of guts, considering the possible repercussions.

              • By _bin_ 2025-05-08 06:43

                "I do not recall"

                • By fhd2 2025-05-08 07:21

                  Yup, Gates style would work. But billionaires have a tendency to not get into serious trouble for lying to the public, a court, congress and what not. Commoners very much do.

          • By genghisjahn 2025-05-07 19:59

            What about 10 years ago when we all copied code from SO? Did we worry about copyright then? Maybe we did and I don’t recall.

            • By layer8 2025-05-07 20:49

              “We” took care to not copy it verbatim (it’s the concrete code form that is copyrighted, not the algorithm), and depending on jurisdiction there is the concept of https://en.wikipedia.org/wiki/Threshold_of_originality in copyright law, which short code snippets on Stack Overflow typically don’t meet.

            • By fhd2 2025-05-07 20:07

              It's roughly the same, legally, and I was well aware of that.

              Legally speaking, you also want to be careful about your dependencies and their licenses, a company that's afraid to get sued usually goes to quite some lengths to ensure they play this stuff safe. A lot of smaller companies and startups don't know or don't care.

              From a professional ethics perspective, personally, I don't want to put my clients in that position unless they consciously decide they want that. They hire professionals not just to get work done they fully understand, but to a large part to have someone who tells them what they don't know.

              • By genghisjahn 2025-05-07 20:34

                You raise a good point. It was kinda gray in the SO days. You almost always had to change something to get your code to work. But a lot of LLMs can spit out code that you can just paste in. And I guess maybe the tests all pass, but if it goes wrong, you, the coder, probably don't know where it went wrong. But if you'd written it all yourself, you could probably guess.

                I'm still sorting all this stuff out personally. I like LLMs when I work in an area I know well. But vibing in areas of technology that I don't know well just feels weird.

            • By pfannkuchen 2025-05-07 20:33

              SO seems different because the author of the post is republishing it. If they are republishing copyrighted material without notice, it seems like the SO author is the one in violation of copyright.

              In the LLM case, I think it’s more of an open question whether the LLM output is republishing the copyrighted content without notice, or simply providing access to copyrighted content. I think the former would put the LLM provider in hot water, while the latter would put the user in hot water.

        • By shmel 2025-05-07 19:44

          How is it different from the cloud? Plenty startups store their code on github, run prod on aws, and keep all communications on gmail anyway. What's so different about LLMs?

          • By simion314 2025-05-07 20:43

            >How is it different from the cloud? Plenty startups store their code on github, run prod on aws, and keep all communications on gmail anyway. What's so different about LLMs?

            Those plenty startups will also use Google, OpenAi or the built in Microsoft AI.

            This is clearly for companies that need to keep the sensitive data under their control. I think they also get support with adding more training to the model to be personalized for your needs.

          • By layer8 2025-05-07 20:52

            It’s not different. If you have confidentiality requirements like that, you also don’t store your code off-premises. At least not without enforceable contracts about confidentiality with the service provider, approved by the client.

          • By jamessinghal 2025-05-07 19:57

            I think it's a combination of a fundamental distrust of the model makers and a history of them training on user data with and without consent.

            The main players all allow some form of zero data retention but I'm sure the more cautious CISO/CIOs flat out don't trust it.

            • By tcoff91 2025-05-07 20:22

              I think that using something like Claude on Amazon Bedrock makes more sense than directly using Anthropic. Maybe I'm naive but I trust AWS more than Anthropic, OpenAI, or Google to not misuse data.

        • By mark_l_watson 2025-05-07 19:20

          I have good results running Ollama locally with open models like Gemma 3, Qwen 3, etc. The major drawback is slower inference speed. Commercial APIs like Google Gemini are so much faster.

          Still, I find local models very much worth using after taking the time to set them up with Emacs, open-codex, etc.

        • By trollbridge 2025-05-07 19:13

          Most of my clients have the same requirement. Given the code bases I see my competition generating, I suspect other vendors are simply violating this rule.

        • By abujazar 2025-05-08 00:11

          You can set up your IDE to use local LLMs through e.g. Ollama if your computer is powerful enough to run a decent model.
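As a concrete example, querying a local Ollama server from code is only a few lines (a sketch; assumes Ollama is running on its default port 11434 and the model has been pulled locally, e.g. `ollama pull qwen3:8b`; the model name here is illustrative):

```python
import json
import urllib.request

def build_payload(prompt, model):
    # Non-streaming, single-shot completion request
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="qwen3:8b", host="http://localhost:11434"):
    """Send one prompt to a local Ollama server via its /api/generate endpoint."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```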

        • By crimsoneer 2025-05-08 04:22

          Are your clients not on AWS/Azure/GCP? They all offer private LLMs out of the box now.

          • By ATechGuy 2025-05-08 21:46

            That was my question too.

        • By blitzar 2025-05-08 11:02

          I also kind of don't understand how it seems everyone is using AI for doing their homework. I haven't had a teacher yet which would have approved any AI usage.

          Same process, less people being called out for "cheating" in a professional setting.

        • By Pamar 2025-05-08 06:18

          Personally I am trying to see if we can leverage AI to help write design documents instead of code, based on a fairly large library of human (poorly) written design documents and bug reports.

        • By betterThanTexas 2025-05-07 20:34

          I would take any such claim with a heavy rock of salt because the usefulness of AI is going to vary drastically with the sort of work you're tasked with producing.

      • By demarq 2025-05-08 14:25

        Also it’s like saying you can host a database on your Mac.

        Unless you have experience hosting and maintaining models at scale and with an enterprise feature set, I believe what they are offering is beyond (for now) what you’d be able to put up on your own.

      • By ATechGuy 2025-05-07 21:36

        Have you tried using private inference that uses GPU confidential computing from Nvidia?

    • By lolinder 2025-05-07 23:36

      Game changer feels a bit strong. This is a new entry in a field that's already pretty crowded with open-source tooling available to anyone with the time and desire to wire it all up. It's likely that they execute this better than the community-run projects have so far and make it more approachable and enterprise friendly, but just for reference, I have most of the features they've listed here already set up on my desktop at home with Ollama, Open WebUI, and a collection of small hand-rolled apps that plug into them. I can't run very big models on mine, obviously, but if I were an enterprise I would.

      The key thing they'd need to nail to make this better than what's already out there is the integrations. If they can make it seamless to integrate with all the key third-party enterprise systems then they'll have something strong here, otherwise it's not obvious how much they're adding over Open WebUI, LibreChat, and the other self-hosted AI agent tooling that's already available.

      • By troyvit 2025-05-08 14:59

        > crowded with open source tooling that's already available to anyone with the time and desire to wire it all up.

        Those who don't have the time and desire to wire it all up probably make up a larger part of the market than those who do. It's a long-tail proposition, and that might be a problem.

        > I have most of the features that they've listed here already set up on my desktop at home

        I think your boss and your boss' boss are the audience they are going for. In my org there's concern over the democratization of locally run LLMs and the loss of data control that comes with it.

        Mistral's product would allow IT or Ops or whatever department to set guardrails for the organization. The selling point that it's turn-key means that a small organization doesn't have to invest a ton of time into all the tooling needed to run it and maintain it.

        Edit: I just re-read your comment and I do have to agree though. "game-changer" is a bit strong of a word.

    • By abujazar 2025-05-08 00:08

      Actually you shouldn't be running LLMs in Docker on Mac because it doesn't have GPU support. So the larger models will be extremely slow if they'll even produce a single token.

    • By burnte 2025-05-07 19:19

      I have an M4 Mac Mini with 24GB of RAM. I loaded LM Studio on it two days ago and had Mistral NeMo running in ten minutes. It's a great model; I need to figure out how to add my own writing to it, as I want it to generate some starter letters for me. Impressive model.

    • By raxxorraxor 2025-05-08 08:56

      I think the standard setup of VS Code with Continue pointed at Ollama is already 99% of the AI coding support I need. I think it is even better than commercial offerings like Cursor, at least in the projects and languages I use and have tested it with.

      We had a Mac Studio here that nobody was using, and we now use it as a tiny AI station. If we like, we could even embed our codebases, but it hasn't been necessary yet. Otherwise it should be easy to just buy a decent consumer PC with a stronger GPU, but performance isn't too bad even for autocomplete.

      • By thepill 2025-05-08 09:03

        Which models are you using?

    • By Palmik 2025-05-08 13:24

      I really don't see the big deal. Gemini also allows on-prem in similar fashion: https://cloud.google.com/blog/products/ai-machine-learning/r...

    • By nicce 2025-05-07 20:42

      > Btw, you can also run Mistral locally within the Docker model runner on a Mac.

      Efficiently? I thought macOS does not have an API that would let Docker use the GPU.

    • By v3ss0n 2025-05-07 20:58

      What's the point when we can run much more powerful models now? Qwen3, DeepSeek

      • By _bin_ 2025-05-08 01:58

        It would be short-termist for Americans or Europeans to use Chinese-made models. Increasing their popularity has an indirect but significant long-term cost. China "winning AI" should be an unacceptable outcome for America or Europe, by any means necessary.

        • By atwrk 2025-05-08 08:46

          Why would that be? I can see why Americans wouldn't want to do that, but Europeans? In the current political climate, where the US openly claims their desire to annex European territory and so on? I'd rather see them prefer a locally hostable open source solution like DeepSeek.

          • By tigroferoce 2025-05-08 11:08

            My two cents, as a European, is that since we are more and more asking LLMs for information, it wouldn't be wise to let a foreign country that is not even truly democratic choose the information we get.

            • By jamesblonde 2025-05-08 16:50

              The Chinese don't get any of the information if we use self-hosted DeepSeek or Qwen. They are open-source. You can run them in an air-gapped environment that can't phone home.

              • By fennecbutt 2025-05-10 11:15

                But their models are gimped by bad censoring. At least I can still ask chatgpt how many innocent civilians America has bombed.

    • By ulnarkressty 2025-05-07 19:39

      I think many in this thread are underestimating the desire of VPs and CTOs to just offload the risk somewhere else. Quite a lot of companies handling sensitive data are already using various services in the cloud and it hasn't been a problem before - even in Europe with its GDPR laws. Just sign an NDA or whatever with OpenAI/Google/etc. and if any data gets leaked they are on the hook.

      • By boringg 2025-05-07 19:45

        Good luck ever winning that one. How are you going to prove out a data leak with an AI model without deploying excessive amounts of legal spend?

        You might be talking about small tech companies that have no other options.

    • By dzhiurgis 2025-05-08 09:38

      How many is many? Literally all of them use cloud services.

    • By ATechGuy 2025-05-07 21:24

      Why not use confidential computing based offerings like Azure's private inference for privacy concerns?

  • By beernet 2025-05-08 11:08

    Mistral really achieved what all the other over-hyped EU AI start-ups / collectives (Stability, Eleuther, Aleph Alpha, Nyonic, possibly Black Forest Labs, government-funded collaborations, ...) failed to achieve, although many of them existed well before Mistral. Congrats to them, great work.

    • By Palmik 2025-05-08 13:25

      It feels to me they turned into a generic AI consulting & solutions company. That does not mean it's a bad business, especially since they might benefit from the "built in EU" spin (whether through government contracts, regulation, or otherwise).

      One can already deploy a similar solution (on-prem) using better and more cost-efficient open-source models and infrastructure.

      What Mistral offers here is managing that deployment for you, but there's nothing stopping other companies from doing the same with a fully open stack. And those will have the benefit of not spending money on R&D.

      • By jamesblonde 2025-05-08 16:35

        That's what we do with Hopsworks - EU built platform for developing and operating AI systems. We have customers running DeepSeek-v3 and Llama models. I never thought about slapping a Chat UI on it and selling the Chat app as a ready made product for the sovereign AI market. But why not.

    • By stogot 2025-05-08 12:44

      I’m wondering why. More funding, better talent, strategy, or something else?

      • By agumonkey 2025-05-08 13:06

        I'm an outsider, but none of the startups mentioned above had ever come to my attention. Mistral suddenly popped up after OpenAI/Anthropic exploded, and they were rapidly described as the third contender, with emphasis on technical merit. Maybe I was fooled, though.

        • By danielbln 2025-05-08 13:23

          Black Forest Labs are the makers of FLUX, which for a while was the best open image model available (and generally a pretty strong image model). That said, now with a wave of Chinese models and the advent of autoregressive image models, I'm not sure how much that will stay true.

    • By bobxmax 2025-05-08 14:53

      Is Mistral really doing anything new here? Llama models are open source, Cohere runs on-prem, etc.

    • By retinaros 2025-05-08 15:23

      what did they achieve exactly?

      • By beernet 2025-05-12 08:51

        Signs of market traction, and executing on product development. None of the other companies mentioned ever made it there.

  • By 85392_school 2025-05-07 17:32

    This announcement accompanies the new and proprietary Mistral Medium 3, being discussed at https://news.ycombinator.com/item?id=43915995

HackerNews