Shine in data science interviews by mentioning these topics

If you have all the technical skills of a data scientist, you have a good chance of acing your job interview. But if you want an extra boost, there are other things worth demonstrating about yourself. One of them is something I mentioned in my previous articles: soft skills. Today, however, I want to talk about something else.

Data Science is evolving rapidly. New techniques are being developed, new tools are being launched and new approaches are being discovered. Naturally, companies lean towards hiring people who are willing to keep up. Of course, you don’t need to know every single new thing happening in the world of AI or data science to prove that you are interested. Some advancements are just too specific for every data scientist to know. But there are some topics that concern everyone in the area of AI, from data scientists to project managers. Topics that everyone should know about and have an opinion on.

Two such topics in data science are explainability and AI Ethics. They are gaining traction and becoming two of the most important themes in AI beyond the core technical skills. Demonstrating your understanding of them can do wonders for your prospects, even if it only means showing that you have given them some thought.

In this article, I will introduce explainability and AI Ethics to you, explain what they are and share some of the commonly used tools to apply them to everyday projects. After reading this article, you should be able to comfortably discuss these themes in a job interview and know how to apply them to your personal or professional projects.

Before we get into detail, I would like to point out that these two topics are related. As AI became more widely adopted, people naturally wanted to understand how AI systems made their decisions and why they were so successful. Not only out of curiosity, but because of the effect AI started to have on people’s lives. Today, among many other things, AI algorithms are used to decide who gets fired or who gets a loan.

Of course, we have a good idea of how machine learning algorithms work in theory. But with many of those algorithms, it is not possible to see which attribute of a specific case causes the algorithm to decide one way or the other. Not until recently, anyway. And so it was not, and to a certain extent still is not, possible to justify the decisions a machine learning algorithm makes.

There have been examples of unfair applications caused simply by a lack of awareness of this issue: companies realizing that the AI system they have in place favours a certain race or discriminates against a certain gender. This led the public to start discussing the regulation of AI technology and, more generally, AI Ethics. Let’s take a closer look at these themes.

Explainability

Machine learning algorithms are referred to as “black boxes” by many who teach them. It is a pretty accurate description, because we don’t really know what’s happening inside. The image below shows what this looks like in simple terms: you give some input, something happens, and you get some output. You can change some parameters of what happens in the black box and make the output better, but you do not know why.

It might be comforting to believe in a machine learning fairy taking care of your troubles and you might say, “Hey, if it’s working, it’s working. No need to snoop around inside the model.” Let’s see if I can change your mind with this example:
Researchers trained a model with pictures of wolves and huskies.[1] The goal of the model was to classify a given image as husky or wolf, and it did great, with high accuracy. But when the researchers looked at what information in the picture the model was using to decide whether a photo showed a wolf or a husky, they saw this:

The explanation showed that all the photos of wolves had snow in the background, so the model had learned to classify based on whether there was snow in the background. When it was given a photo of a husky with snow in the background, it naturally failed to classify it properly, because it had been picking up the wrong pattern to begin with. A disclaimer here: this model was trained with hand-selected data specifically to research model explanations. But these kinds of non-obvious, undesirable and hard-to-detect correlations in data have the potential to affect machine learning models significantly, as in this example.

Apart from the fact that explainability helps us make fewer mistakes by showing what an algorithm bases its decisions on, there is also a great advantage in understanding successful models. By making visible the patterns a machine learning algorithm bases its decisions on, we can gain a better understanding of the problem at hand.

Before going into some tools for explainability, it is good to note that some algorithms are easier to explain than others, even without dedicated tools. Feature importance charts are a good way to see how important a certain feature is in the overall decision process, and they come for free with tree-based algorithms such as decision trees, random forests and gradient boosted trees. Keep in mind, though, that the importances shown on these charts are not independent of each other, and it is not easy to compare the effects of changing feature values. Recently developed explainability tools do a better job: they explain each individual instance by displaying an importance or weight for each feature.
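To make that concrete, here is a minimal sketch of how such importances are obtained from a tree-based model with scikit-learn. The dataset is just a stand-in; any tabular classification data would do.

```python
# Minimal sketch: global feature importances from a tree-based model.
# The dataset here is a placeholder, not the one used later in this article.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)   # any tabular dataset works
X, y = data.data, data.target

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances: one number per feature, summing to 1.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```

Note that these are global numbers for the whole model, which is exactly the limitation mentioned above: they say nothing about any individual prediction.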

Some tools for interpreting machine learning models are SHAP and Lime (the ones I have used so far), XGBoostExplainer (specific to XGBoost), Skater and ELI5.

Here is what the explanation looks like with some of these tools.

SHAP explains a model’s decisions by assigning a value, called the Shapley value, to each feature; that value becomes the feature’s importance. The Shapley values determine how the prediction is distributed among the features. A resulting explanation of an individual data point looks like this:

This specific use case was determining the likelihood of a patient having heart disease. The value marked “output value” on top of the line is the prediction for this patient: 1 means heart disease is present and 0 means it is not. The blue and red arrows show in which direction each feature pushes the prediction.

It looks like this patient’s sex and thal value raise their likelihood of having heart disease, whereas other features, like chest pain type or the number of blocked major vessels, point to a lower risk. The length of the arrows shows how important a certain feature is in comparison to the others. So in this example, the thal value is much more important: it makes heart disease much more likely than the patient’s sex does.
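For reference, a force plot like the one above can be produced with just a few lines of SHAP. This is only a sketch on stand-in data: the dataset, model and column names are assumptions, not the exact setup behind the figure.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in binary classification data; a heart-disease table with columns
# like 'sex', 'thal', 'cp' and 'ca' would be a drop-in replacement.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # Shapley values, computed efficiently for tree models
shap_values = explainer.shap_values(X)   # one row of per-feature contributions per sample

# Force plot for a single prediction: red arrows push the output up,
# blue arrows push it down, starting from the base (expected) value.
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :],
                matplotlib=True)
```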

Lime treats the model as a black box, just as we do, and looks at the relationship between its input and output. To understand how the model makes decisions, it tweaks the input, observes how the output changes, and explains individual instances with this technique. The same instance from the same model, explained using LIME, looks like this:

The chart on the left shows the actual prediction, and the one in the middle shows how Lime explains the contribution of each feature to the overall outcome for this instance. Again, the length and the order (from top to bottom) of the bars show the importance or weight of the features in the final decision.
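Producing a LIME explanation for a single row follows the same pattern. Again, this is only a sketch on stand-in data and a stand-in model rather than the exact example shown above.

```python
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X.values,
    feature_names=list(X.columns),
    class_names=["negative", "positive"],
    mode="classification",
)

# LIME perturbs this one row, watches how the model's output changes,
# and fits a small local model whose weights become the explanation.
exp = explainer.explain_instance(X.values[0], model.predict_proba, num_features=8)
print(exp.as_list())   # (feature condition, weight) pairs, most important first
# exp.show_in_notebook() renders the chart with the prediction and per-feature bars
```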

The two explainers seem to have a similar idea of why this person was predicted to have a 50% chance of heart disease. But this is not always the case. Because the explainers approximate the model’s prediction process with different techniques rather than replicating it exactly, there are instances where they don’t agree. On a use case like this, where we can apply some common sense, it might seem easy to judge whether the explainers and/or the model are making sense. But when we are working in a domain we are not familiar with or not experts in, it becomes trickier to decide which explanation to trust. This reminds us to take the explanations with a grain of salt and not make grand claims based on them.

AI Ethics 

The lack of explainability of models and a couple of unfortunate AI applications have caused concerns in the AI community over the years. People realized that AI systems had started reflecting the less-than-ideal parts of real life, which is not surprising, because a machine learning algorithm is built to learn the patterns in its data. If you feed it data on all the CEOs in the world, then the next time you ask it for the likelihood of someone being a CEO, it will naturally give lower probabilities to women than to men, simply because that is the pattern present in the data.

AI Ethics is simply an effort to make sure that AI systems are fair and trustworthy. We do not want the AI systems we train to be biased or to discriminate against any group of people.

There are a couple of main problems in AI Ethics. Let’s go over them.

The first one is the problem of bias. If you train your model on a biased dataset, naturally you will end up with a biased model that discriminates against certain groups. What does this look like in real life, you ask? Here are some examples.

Here is a pretty embarrassing example, where a passport application system mistook a man of Asian descent for a person with their eyes closed. [2]

Another one is from Amazon, where male job applicants were favoured over their female counterparts simply because, in the training data, the previously hired people were mostly men. This led the model to believe that men were more suited to the position. [3]

This one, from Google, is one of my favourites. A disclaimer first: this is an example from a couple of years ago, and Google has a solution to the problem now, which I will share in a second. But back to the biased system: a couple of years ago, when you translated the phrase “She is a doctor. He is a nurse.” into a language without gender-specific pronouns and then translated it back, you would get a pretty sexist translation, as if Google Translate didn’t consider the possibility that the doctor could be a woman. This, again, was caused by the biased dataset the model was trained on.

Here are some ways to deal with bias:

  • Do not include personal information in model training if you can avoid it. If you are developing a loan decision model, you know it is not relevant for the model whether a person is a man or a woman.
  • If you absolutely need to include personal information, make sure that your dataset is balanced and not biased for or against any particular group. There are tools and metrics to measure the level of bias; one example is IBM’s AI Fairness 360 toolkit (see the short sketch after this list).
  • Excluding personal information from your dataset might not be enough on its own. You should also make sure there are no hidden proxies for personal information in the dataset. For example, you might think you are not including race in training, but if the dataset comes from a highly segregated city where people of the same race tend to live in the same neighbourhoods, then by including the postcode you would effectively be including race.
  • Even when you are not using personal information, always make sure that your dataset represents all possible groups fairly.
  • Keep an eye out for potentially biased applications and adapt your system. There are many hard-to-find corner cases, so be ready to take creative measures. As an example, here is how Google Translate adapted after the problem above was discovered: [4] it now shows you both options when the source language is ambiguous.
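As a minimal illustration of what “measuring bias” means in practice, here is one common metric, disparate impact, computed with plain pandas on a tiny made-up table. Toolkits like AI Fairness 360 wrap this and many other metrics; the column names and threshold below are illustrative assumptions.

```python
import pandas as pd

# Tiny made-up table of model decisions, purely for illustration.
df = pd.DataFrame({
    "sex":      ["M", "M", "F", "F", "M", "F", "M", "F"],
    "approved": [1,    1,   0,   1,   1,   0,   1,   0],
})

# Rate of favourable outcomes per group.
rates = df.groupby("sex")["approved"].mean()

# Disparate impact: unprivileged group's rate divided by privileged group's rate.
# A value far below 1.0 (a common rule of thumb is < 0.8) signals possible bias.
disparate_impact = rates["F"] / rates["M"]
print(rates)
print(f"disparate impact: {disparate_impact:.2f}")
```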

I’ve shown you some examples of problems caused by biased data, but bias is not the only problem in AI Ethics. Misuse of AI systems can also lead to seriously unethical practices.

Here is an example of a school teacher who was praised very highly by her students and their parents, yet found herself fired. According to a statistical evaluation of her students’ grades, she was not contributing enough to the school’s overall math score. So, due to the system in place, she was fired even though she was a beloved teacher. [5]

Another example comes from a study done by universities on college students who drop out. Based on data from students of past years, a model was trained to estimate the probability of a given student dropping out after a couple of years. One university used this model to score every applicant and eliminate those with a higher chance of dropping out. I’m sure you see why this is unethical and unfair. It reminds me of the movie “Minority Report”, where people are arrested before they commit a crime, based on an oracle’s visions.

We shouldn’t lose hope in the usefulness of AI systems because of these examples. There are ways to integrate AI systems into decision processes so that they support the decisions rather than dictate them. For example, going back to the dropout-probability model, another university did not let it affect its admissions; instead, it set up a platform to support students with a higher chance of dropping out and make sure they stayed in school.

After all, these systems are made by us, and using their outcomes ethically, rather than letting them become excuses for unfairness, is also on us.

Another question in AI Ethics is the responsibility of AI, which is more of a problem for the future. It is the question of “if an AI system hurts a human, who is responsible?” Of course, this is not about robots rebelling but about self-driving cars and automated smart systems in general. A good illustration of this question is the trolley problem.

The trolley problem is the ethical question shown in the photo below. If a trolley is about to hit and kill five people and we have the chance to divert it so that it kills only one person, would we do it? Or rather, what is the ethical thing to do? That one person is not in any immediate danger unless we decide to divert the trolley to save the other five.

In AI responsibility, the problem is posed like this: say there is a self-driving car; if it goes straight, it will hit a wall and the people in the car will be killed, but if it swerves left, it will hit other people and they will be killed. One detail: the people crossing the street are crossing against a red light. What should we teach the car to do? What if the people crossing the street were elderly? What if they were pregnant women?

It is a hard question to answer, but it is one that needs an answer. Let’s say we decide during the car’s design process that it should protect the people in the car at all costs. What if this scenario happens and the people crossing the street are killed in the accident? Whose fault is it then?

As you can see from the examples and questions we are still pondering, AI Ethics has a long way to go. Attaining ethical AI, unfortunately, is not as easy as going over a checklist. There are best practices the community has discovered over the years by making mistakes and learning from them, but we do not yet have an exhaustive list of everything that can go wrong. The best we can do is keep an eye out for possibly unethical applications and take the necessary precautions to stay one step ahead of them.

I hope this article was a good starting point for you. You should now be able to describe what explainability means in the context of AI and what AI Ethics entails. I’m sure your potential employers will be impressed to hear that you have not only heard of these terms but also done enough reading to give some examples of what can go wrong and how to address it. It might be even better if you can ask your interviewer what they do to address these issues in their work, where applicable.

On top of being great conversation topics during data science interviews, these are going to be major themes in your professional life once you start working, and for good reason. So keep them in mind and read up as much as you can.

If you are interested in learning more, here are some articles I came across:

Explainability:

Hands-on Machine Learning Model Interpretation

AI ethics:

The Hitchhiker’s Guide to AI Ethics

References:

  1. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
  2. New Zealand passport robot tells applicant of Asian descent to open eyes
  3. Amazon scraps secret AI recruiting tool that showed bias against women
  4. Reducing gender bias in Google Translate
  5. 'Creative ... motivating' and fired