AI Chatbots on Causality

Having recently encountered some major causality train wrecks, I got curious about LLM “understanding” of causality. If AI chatbots are trained on the web corpus, and the web doesn’t “get” causality, there’s no reason to think that AI will make sense either.

TL;DR: ChatGPT and Bing utterly fail this test, for reasons that are evident in Google Bard’s surprisingly smart answer.

ChatGPT: FAIL

Bing: FAIL

Google Bard: PASS

Google gets strong marks for mentioning a bunch of reasons to expect that we might not find a correlation, even though x is known to cause y. I’d probably only give it a B+, because it neglected integration and feedback, but it’s a good answer that properly raises lots of doubts about simplistic views of causality.
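The integration point deserves a concrete illustration. Here’s a minimal sketch of my own (not part of any chatbot answer): when y is simply the accumulation of x, the sample correlation between x and y can be close to zero even though x is the sole cause of y.

```python
import numpy as np

# Toy example: y is the integral (accumulation) of x, so x fully causes y,
# yet the correlation between x and y over whole cycles is ~0.
t = np.linspace(0, 4 * np.pi, 1000)
dt = t[1] - t[0]

x = np.sin(t)                 # the cause: a driving flow
y = np.cumsum(x) * dt         # the effect: the accumulated stock, ~ 1 - cos(t)

r = np.corrcoef(x, y)[0, 1]
print(f"correlation(x, y) = {r:.3f}")   # near zero despite pure causation
```

A naive correlation test on this data would conclude that x has nothing to do with y, which is exactly the kind of trap a simplistic view of causality walks into.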

Assessing the predictability of nonlinear dynamics

An interesting exploration of the limits of data-driven predictions in nonlinear dynamic problems:

Assessing the predictability of nonlinear dynamics under smooth parameter changes
Simone Cenci, Lucas P. Medeiros, George Sugihara and Serguei Saavedra
https://doi.org/10.1098/rsif.2019.0627

Short-term forecasts of nonlinear dynamics are important for risk-assessment studies and to inform sustainable decision-making for physical, biological and financial problems, among others. Generally, the accuracy of short-term forecasts depends upon two main factors: the capacity of learning algorithms to generalize well on unseen data and the intrinsic predictability of the dynamics. While generalization skills of learning algorithms can be assessed with well-established methods, estimating the predictability of the underlying nonlinear generating process from empirical time series remains a big challenge. Here, we show that, in changing environments, the predictability of nonlinear dynamics can be associated with the time-varying stability of the system with respect to smooth changes in model parameters, i.e. its local structural stability. Using synthetic data, we demonstrate that forecasts from locally structurally unstable states in smoothly changing environments can produce significantly large prediction errors, and we provide a systematic methodology to identify these states from data. Finally, we illustrate the practical applicability of our results using an empirical dataset. Overall, this study provides a framework to associate an uncertainty level with short-term forecasts made in smoothly changing environments.
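The core intuition is easier to see with a toy model. The sketch below is not the authors’ methodology (they estimate local structural stability from empirical time series); it just illustrates the idea with a logistic map whose parameter drifts smoothly: one-step forecasts from slightly mis-measured states go wrong faster where the current state is locally unstable.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's method): logistic map
# x[t+1] = r[t] * x[t] * (1 - x[t]) with a smoothly ramping parameter r.
T = 2000
r = np.linspace(2.8, 3.9, T)          # smooth parameter change
x = np.empty(T)
x[0] = 0.5
for t in range(T - 1):
    x[t + 1] = r[t] * x[t] * (1 - x[t])

# Local one-step sensitivity of the map: |df/dx| = |r * (1 - 2x)|.
# Values above 1 mark locally unstable states.
sensitivity = np.abs(r[:-1] * (1 - 2 * x[:-1]))

# One-step "forecast" from a state observed with a small measurement error.
eps = 1e-3
forecast = r[:-1] * (x[:-1] + eps) * (1 - (x[:-1] + eps))
error = np.abs(forecast - x[1:])

# Forecast errors are larger, on average, where the state is locally unstable.
print("mean error, locally stable states:  ", error[sensitivity < 1].mean())
print("mean error, locally unstable states:", error[sensitivity >= 1].mean())
```

The same data-driven forecaster can look reliable or useless depending on where the system happens to be, which is the practical warning for risk assessment.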

AI babble passes the Turing test

Here’s a nice example of how AI is killing us now. I won’t dignify this with a link, but I found it posted by a LinkedIn user.

I’d call this an example of artificial stupidity, not AI. The article starts off sounding plausible, but quickly degenerates into complete nonsense that’s either automatically generated or translated, with catastrophic results. But it was good enough to make it past someone’s cognitive filters.

For years, corporations have targeted on World Health Organization to indicate ads to and once to indicate the ads. AI permits marketers to, instead, specialize in what messages to indicate the audience, therefore, brands will produce powerful ads specific to the target market. With programmatic accounting for 67% of all international show ads in 2017, AI is required quite ever to make sure the inflated volume of ads doesn’t have an effect on the standard of ads.

One style of AI that’s showing important promise during this space is tongue process (NLP). informatics could be a psychological feature machine learning technology which will realize trends in behavior and traffic an equivalent method an individual’s brain will. mistreatment informatics during this method can match ads with people supported context, compared to only keywords within the past, thus considerably increasing click rates and conversions.