DeepSeek V3 chat was released and it's a very powerful model, on the level of and in places outperforming the current SOTA models from Anthropic, Google, and OpenAI. It is open-weight, so you can download it and at least theoretically run it at home, although as a ~671B-parameter MoE model (256 routed experts per layer, ~37B parameters active per token) it's not practical to run on most hardware. (It is still quite a bit easier to run than a dense model of that size; you could theoretically run it off CPU with 400GB+ of RAM.)
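As a rough sanity check on that RAM figure, here's a back-of-the-envelope calculation (a sketch only: the 671B/37B parameter counts are DeepSeek's published numbers, the byte-per-parameter figures are assumptions, and KV cache and runtime overhead are ignored):

```python
# Back-of-the-envelope memory estimate for holding DeepSeek V3 weights locally.

TOTAL_PARAMS = 671e9   # total parameters (MoE, all experts)
ACTIVE_PARAMS = 37e9   # parameters activated per token

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB."""
    return params * bytes_per_param / 1e9

# At 8-bit (1 byte/param), matching how the model was trained:
print(f"all weights @ 8-bit:    {weight_memory_gb(TOTAL_PARAMS, 1.0):.0f} GB")  # ~671 GB
# A ~4-bit quantization roughly halves that, landing near the 400GB figure:
print(f"all weights @ ~4-bit:   {weight_memory_gb(TOTAL_PARAMS, 0.5):.0f} GB")  # ~336 GB
# Per-token compute only touches the active experts, which is why CPU
# inference is slow but not hopeless:
print(f"active weights @ 8-bit: {weight_memory_gb(ACTIVE_PARAMS, 1.0):.0f} GB") # ~37 GB
```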
The most interesting thing about DeepSeek V3 is that it is a very, very efficient model, using some genuinely novel techniques (like training in FP8 by design), clearly shaped by a GPU-starved China. The internals of the big three American companies and their SOTA models are not known, but I personally think it would not be a stretch to assume that V3 is orders of magnitude more compute-efficient while being just as good or better. It was definitely orders of magnitude cheaper to train. As a result, running it via the DeepSeek API is very cheap: it costs about 10% of what 4o costs. (These are the classic DeepSeek prices; they have already announced they will raise prices in February, but it will still be much cheaper than any other API model, and even with heavy use you would not pay much.)
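If you want to try the API, it's OpenAI-compatible, so switching over is a near one-liner (a minimal sketch; the base URL and model name are from DeepSeek's docs at the time of writing, so double-check them, and DEEPSEEK_API_KEY is a placeholder for your own key):

```python
# Minimal DeepSeek V3 chat call via the OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # your DeepSeek key, not an OpenAI key
    base_url="https://api.deepseek.com",     # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # the V3 chat model
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(response.choices[0].message.content)
```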
It's also uncensored and has no specific noticeable bias towards safety. The Chinese are once again creating facts on the ground in this sphere.