Google AI researchers have open-sourced the Language Interpretability Tool (LIT), a platform that lets third-party developers visualise, understand and audit natural language processing (NLP) models.
LIT helps answer deep questions about model behaviour, such as why an AI model makes a certain prediction, and whether that prediction can be attributed to adversarial behaviour or to undesirable priors in the training set.
LIT calculates and displays metrics for entire data sets to spotlight patterns in model performance.
“In LIT’s metrics table, we can slice a selection by pronoun type and by the true referent,” according to the team behind LIT.
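LIT performs this slicing interactively in the browser, but the underlying idea is straightforward. As a rough, purely illustrative sketch (the records below are hypothetical and hand-written, not LIT's API), this is roughly what slicing a coreference model's accuracy by pronoun type and true referent amounts to:

from collections import defaultdict

# Hypothetical evaluation records; in LIT these would come from the loaded
# dataset and the model's predictions, not hand-written literals.
records = [
    {"pronoun": "she", "true_referent": "doctor", "correct": True},
    {"pronoun": "she", "true_referent": "nurse",  "correct": True},
    {"pronoun": "he",  "true_referent": "nurse",  "correct": False},
    {"pronoun": "he",  "true_referent": "doctor", "correct": True},
]

# Group outcomes by (pronoun type, true referent), mirroring the kind of
# slice LIT's metrics table exposes through its UI.
buckets = defaultdict(list)
for r in records:
    buckets[(r["pronoun"], r["true_referent"])].append(r["correct"])

for (pronoun, referent), outcomes in sorted(buckets.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"pronoun={pronoun:4s} referent={referent:7s} accuracy={accuracy:.2f}")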
The tool supports natural language processing tasks like classification, language modelling, and structured prediction.
According to VentureBeat, the Google researchers say LIT “works with any model that can run from Python, including TensorFlow, PyTorch, and remote models on a server.”
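To give a sense of what that integration looks like, here is a minimal sketch of wrapping a model for LIT, based on the lit_nlp package's Model and Dataset classes; exact method names and signatures may differ between versions, and the toy sentiment classifier here is purely illustrative, standing in for a real TensorFlow or PyTorch model:

from lit_nlp import dev_server
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types


class ToySentimentModel(lit_model.Model):
    """Illustrative stand-in for a real TensorFlow/PyTorch model."""

    LABELS = ["negative", "positive"]

    def input_spec(self):
        # Declares what fields the model consumes from each example.
        return {"sentence": lit_types.TextSegment()}

    def output_spec(self):
        # Declares what the model returns, so LIT can render it.
        return {"probas": lit_types.MulticlassPreds(vocab=self.LABELS,
                                                    parent="label")}

    def predict_minibatch(self, inputs):
        # A real implementation would batch inputs through the underlying
        # framework; here we fake a probability for each example.
        for ex in inputs:
            positive = 0.9 if "good" in ex["sentence"].lower() else 0.1
            yield {"probas": [1.0 - positive, positive]}


class ToyDataset(lit_dataset.Dataset):
    def __init__(self):
        self._examples = [
            {"sentence": "A good movie.", "label": "positive"},
            {"sentence": "A dull movie.", "label": "negative"},
        ]

    def spec(self):
        return {
            "sentence": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=ToySentimentModel.LABELS),
        }


if __name__ == "__main__":
    # Serve the LIT UI locally; open http://localhost:5432 in a browser.
    server = dev_server.Server(
        models={"toy": ToySentimentModel()},
        datasets={"toy_data": ToyDataset()},
        port=5432,
    )
    server.serve()

A remote model hosted on a server would be wrapped the same way, with predict_minibatch issuing RPC or HTTP calls instead of running inference locally.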
Natural language processing is a subfield of linguistics, computer science, information engineering and artificial intelligence concerned with the interactions between computers and human languages, in particular how to program computers to process and analyse large amounts of natural language data.
The Google LIT team said that in the near future, the toolset will gain features such as counterfactual generation plug-ins, additional metrics and visualisations for sequence and structured output types, and a greater ability to customise the user interface (UI) for different applications.