I have to disagree with your claim that internet data causes chatbots to lean liberal. There is good evidence that tariffs are paid by the U.S. importers, not by the external countries; if you claim otherwise, a good chatbot will call that misinformation. That's not politics. If you say there is no global-warming trend, a chatbot can check the climate records. All of the major models besides Grok provide good-quality sources, especially if you ask for them, and those sources are respected ones.
Only Grok shows evidence of checking Elon Musk's opinion in its reasoning. Since Musk is not an economist, a climatologist, a historian, or a public-health scientist, he is not a reasonable source on those topics. Perhaps if the question concerns undergraduate physics or business decisions, checking his opinion might be valid.
So, I’ll stick with other models.
MyModel training start
-> low-quality internet crawl
-> higher-quality public sources: Wikipedia, Stack Overflow
-> higher-quality licensed books and research papers
-> high-quality curated corpora
-> finish with feedback on accuracy.
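The progression above could be sketched as a simple ordered schedule. This is only an illustration of the idea that data quality rises over the course of training; the stage names and `quality` weights are my paraphrase of the list, not any actual lab's recipe.

```python
# Hypothetical staged training-data schedule (illustrative only).
# Each stage is applied in order of increasing quality.
TRAINING_STAGES = [
    {"stage": "low-quality internet crawl", "quality": 1},
    {"stage": "public sources: Wikipedia, Stack Overflow", "quality": 2},
    {"stage": "licensed books and research papers", "quality": 3},
    {"stage": "curated corpora", "quality": 4},
    {"stage": "feedback on accuracy (fine-tuning)", "quality": 5},
]

def schedule():
    """Return stage names in the order they are applied (rising quality)."""
    return [s["stage"] for s in sorted(TRAINING_STAGES, key=lambda s: s["quality"])]

for name in schedule():
    print("->", name)
```

The point of ordering it this way is that the noisy crawl teaches broad language coverage early, while the later, higher-quality stages and the accuracy feedback dominate what the model actually asserts.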
So models aren't just trained on low-quality internet data.
Is it just that the truth is inconvenient? It's not politics.