Linear Regression Questions And Answers Pdf


It is common practice to test data science aspirants on widely used machine learning algorithms in interviews: linear regression, logistic regression, clustering, decision trees, and so on. Data scientists are expected to possess in-depth knowledge of these algorithms. We consulted hiring managers and data scientists from various organisations to learn about the typical ML questions they ask in an interview.

Based on their extensive feedback, a set of questions and answers was prepared to help aspiring data scientists in their conversations. In simple terms, linear regression is a method of finding the best straight line fitting the given data, i.e., the best linear relationship between the independent and dependent variables. In technical terms, linear regression is a machine learning algorithm that finds the best linear-fit relationship on any given data between independent and dependent variables.
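For illustration (this sketch is not from the original article, and the numbers are made up), here is a minimal NumPy example of finding that best straight line:

```python
import numpy as np

# Toy data: y is roughly 2*x + 1 with some noise (made-up numbers).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

# np.polyfit with degree 1 finds the least-squares line y = slope*x + intercept.
slope, intercept = np.polyfit(x, y, 1)
print(f"best-fit line: y = {slope:.3f}*x + {intercept:.3f}")
```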

The best fit is mostly found by the sum of squared residuals method: the line is chosen to minimise the sum of the squared vertical distances between the observed points and the line. Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data. In layman's terms, feature engineering means the development of new features that may help you understand and model the problem in a better way. Feature engineering is of two kinds: business-driven and data-driven.

Business-driven feature engineering revolves around the inclusion of features from a business point of view. The job here is to transform the business variables into features of the problem.

In case of data-driven feature engineering, the features you add do not have any significant physical interpretation, but they help the model in the prediction of the target variable.

To apply feature engineering, one must be fully acquainted with the dataset. This involves knowing what the given data is, what it signifies, what the raw features are, etc. You must also have a crystal clear idea of the problem, such as what factors affect the target variable, what the physical interpretation of the variable is, etc.
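As a small illustration of both kinds (a hypothetical example, not from the article; the column names are made up), consider an online-store dataset:

```python
import pandas as pd

# Hypothetical raw data for an online store (column names are made up).
df = pd.DataFrame({
    "order_time": pd.to_datetime(["2021-05-01 09:30", "2021-05-02 18:45"]),
    "revenue": [120.0, 80.0],
    "n_items": [4, 2],
})

# Business-driven feature: weekend orders may behave differently.
df["is_weekend"] = df["order_time"].dt.dayofweek >= 5

# Data-driven feature: a ratio with no direct business meaning,
# but it may still help the model predict the target.
df["revenue_per_item"] = df["revenue"] / df["n_items"]
print(df)
```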

What is the use of regularisation? Explain L1 and L2 regularisations. Regularisation is a technique that is used to tackle the problem of overfitting of the model. When a very complex model is implemented on the training data, it overfits.

At times, a simple model might not be able to generalise the data, while a complex model overfits. To address this problem, regularisation is used. Regularisation adds a penalty on the coefficient terms (betas) to the cost function, so that large coefficients are penalised and kept small in magnitude. L1 regularisation (lasso) penalises the sum of the absolute values of the coefficients and can shrink some of them exactly to zero, while L2 regularisation (ridge) penalises the sum of their squares and shrinks all of them towards zero. This essentially helps in capturing the trends in the data while preventing overfitting by not letting the model become too complex.
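Here is a minimal scikit-learn sketch (an illustration on synthetic data, not code from the article; the alpha values are made up) showing the typical behaviour of the two penalties:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Only the first two features actually matter in this synthetic target.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=100)

# L2 (Ridge) shrinks all coefficients towards zero.
ridge = Ridge(alpha=1.0).fit(X, y)
# L1 (Lasso) can drive irrelevant coefficients exactly to zero.
lasso = Lasso(alpha=0.1).fit(X, y)

print("ridge:", ridge.coef_.round(2))
print("lasso:", lasso.coef_.round(2))
```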

Selecting the value of the learning rate is a tricky business. If the value is too small, the gradient descent algorithm takes ages to converge to the optimal solution. On the other hand, if the value is too high, gradient descent will overshoot and most likely never converge to the optimal solution. To overcome this problem, you can try several values of the learning rate (alpha) over a range and plot the cost vs the number of iterations for each.

Then, based on the graphs, choose the value whose curve shows the most rapid decrease. In an ideal cost vs number of iterations curve, the cost initially decreases as the number of iterations increases, but after a certain number of iterations gradient descent converges and the cost does not decrease any more.

If you see that the cost is increasing with the number of iterations, your learning rate parameter is too high and needs to be decreased.
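To make the procedure concrete, here is a small self-contained sketch (an illustration with made-up data and alpha values, not code from the article) that runs gradient descent with a few learning rates and tracks the cost:

```python
import numpy as np

# Synthetic data for y = 2x + 1 (made-up example).
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=50)
X = np.column_stack([np.ones_like(x), x])  # first column of 1s = intercept

def cost(theta):
    residuals = X @ theta - y
    return (residuals ** 2).mean() / 2

for alpha in (0.001, 0.01, 0.03):
    theta = np.zeros(2)
    costs = []
    for _ in range(200):
        grad = X.T @ (X @ theta - y) / len(y)
        theta -= alpha * grad
        costs.append(cost(theta))
    # A good alpha shows a rapid, steady decrease in cost; too large an
    # alpha makes the cost oscillate or blow up.
    print(f"alpha={alpha}: final cost {costs[-1]:.4f}")
```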

Selecting the regularisation parameter is also a tricky business. What you can do is take sub-samples of the data and run the algorithm multiple times on the different sets with different parameter values; the person then has to decide how much variance in the resulting estimates can be tolerated. One can use linear regression for time series analysis, but the results are not promising.

So, it is generally not advisable to do so. The main reasons are that time-series data tends to violate the core assumptions of linear regression: successive errors are autocorrelated rather than independent, and the data often contains trend and seasonality that a straight-line fit cannot capture.

What is the sum of the residuals of a linear regression? Ans: The sum of the residuals of a linear regression is 0. Linear regression works on the assumption that the errors (residuals) are normally distributed with a mean of 0, i.e., the expected value of the errors is 0. So, the sum of all the residuals is the expected value of the residuals times the total number of data points. Since the expectation of the residuals is 0, the sum of all the residual terms is zero.
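This is easy to verify numerically; a quick sketch with made-up data (note the property holds for a least-squares fit that includes an intercept):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 5, 30)
y = 1.5 * x + 2 + rng.normal(size=30)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# For an OLS fit that includes an intercept, the residuals sum to ~0
# (up to floating-point error).
print(residuals.sum())  # e.g. something on the order of 1e-14
```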

What is multicollinearity and how does it affect the model? Ans: Multicollinearity occurs when some of the independent variables are highly correlated (positively or negatively) with each other. This causes a problem because it goes against a basic assumption of linear regression. The presence of multicollinearity does not affect the predictive capability of the model, so if you just want predictions, it does not hurt your output.

One of the major problems caused by multicollinearity is that it leads to incorrect interpretations and provides wrong insights.

The coefficients of a linear regression give the mean change in the target value when a feature changes by one unit, holding the other features constant. If multicollinearity exists, this interpretation does not hold, because changing one feature also changes the correlated variables, with consequent changes in the target variable. This leads to wrong insights and can produce hazardous results for a business.

Multicollinearity is commonly measured with the Variance Inflation Factor (VIF): for feature j, VIF_j = 1 / (1 - R_j^2), where R_j^2 is the R^2 obtained by regressing feature j on all the other features. The higher the VIF of a feature, the more linearly correlated that feature is with the others. A simple remedy is to remove the features with very high VIF values and re-train the model on the remaining dataset.

Note that the first column of the X matrix consists of all 1s; this incorporates the offset (intercept) of the regression line. With this X, the normal equation gives the coefficients directly as beta = (X^T X)^(-1) X^T y. Comparing gradient descent with the normal equation: gradient descent requires choosing a learning rate and running many iterations but scales to very large datasets, whereas the normal equation needs neither a learning rate nor iterations but has to form and invert X^T X, which becomes expensive when there are many features or examples. Clearly, if we have large training data, the normal equation is not preferred.
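For concreteness, a minimal NumPy sketch of the normal equation on synthetic data (an illustration, not the article's code):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 2))
y = 4 + 2 * x[:, 0] - 3 * x[:, 1] + rng.normal(scale=0.1, size=100)

# Prepend a column of 1s so the first coefficient is the intercept (offset).
X = np.column_stack([np.ones(len(x)), x])

# Normal equation: beta = (X^T X)^(-1) X^T y.
# np.linalg.solve is preferred over explicitly inverting X^T X.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta.round(2))  # approximately [4, 2, -3]
```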

You run your regression on different subsets of your data, and in each subset, the beta value for a certain variable varies wildly.

What could be the issue here? This case implies that the dataset is heterogeneous. So, to overcome this problem, the dataset should be clustered into different subsets, and then separate models should be built for each cluster. Another way to deal with this problem is to use non-parametric models, such as decision trees, which can deal with heterogeneous data quite efficiently.
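A hedged sketch of the cluster-then-fit idea on synthetic two-population data (everything here is made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
# Two sub-populations with very different slopes (synthetic example).
x1 = rng.uniform(0, 5, (50, 1));  y1 = 5 * x1[:, 0] + rng.normal(size=50)
x2 = rng.uniform(10, 15, (50, 1)); y2 = -2 * x2[:, 0] + rng.normal(size=50)
X = np.vstack([x1, x2]); y = np.concatenate([y1, y2])

# Cluster first, then fit one linear model per cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in (0, 1):
    model = LinearRegression().fit(X[labels == k], y[labels == k])
    print(f"cluster {k}: slope {model.coef_[0]:.2f}")
```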

A degenerate case arises when there is a perfect correlation (positive or negative) between some variables: the matrix X^T X becomes singular, so there is no unique value for the coefficients.

Adjusted R², just like R², indicates how closely the points lie around the regression line, i.e., how well the model fits the training data. Unlike R², it is adjusted for the number of predictors, so it increases only when a newly added feature genuinely improves the fit. The residual vs fitted value plot is used to see whether the predicted values and the residuals are correlated.

If the residuals are distributed roughly normally around zero with a constant variance across the fitted values, the model is working fine; otherwise, there is some issue with the model. The most common problem found when training a model over a large range of a dataset is heteroscedasticity (explained in the answer below).

The presence of heteroscedasticity can be easily seen by plotting the residual vs fitted value curve. A random variable is said to be heteroscedastic when different subpopulations have different variabilities (standard deviations). The existence of heteroscedasticity causes problems in regression analysis because the model assumes that the error terms are uncorrelated and have constant variance.

The presence of heteroscedasticity can often be seen in the form of a cone-like scatter plot for residual vs fitted values.
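A quick way to see the cone for yourself, with synthetic heteroscedastic data (a made-up illustration, not the article's figure):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
x = rng.uniform(1, 10, 200)
# Heteroscedastic noise: the spread grows with x (synthetic example).
y = 3 * x + rng.normal(scale=0.5 * x)

slope, intercept = np.polyfit(x, y, 1)
fitted = slope * x + intercept
residuals = y - fitted

# The cone shape (residual spread widening with the fitted value)
# is the classic visual signature of heteroscedasticity.
plt.scatter(fitted, residuals, s=10)
plt.axhline(0, color="red")
plt.xlabel("Fitted values"); plt.ylabel("Residuals")
plt.show()
```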

One of the basic assumptions of linear regression is that heteroscedasticity is not present in the data. There is no fixed procedure to overcome heteroscedasticity. However, there are some ways that may lead to a reduction of heteroscedasticity.

They are: transforming the target variable (for example, taking its logarithm or square root) and using weighted least squares, which down-weights the high-variance observations.

Linearity, in simple terms, means that the target variable is linearly dependent on the other variables. To see whether linear regression is suitable for any given data, a scatter plot can be used.

If the relationship looks linear, we can go for a linear model. But if it is not the case, we have to apply some transformations to make the relationship linear.
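For instance, a hedged sketch (made-up data) where a log transform of the target makes the relationship linear:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(1, 10, 100)
# y grows exponentially with x, so a straight line fits it poorly...
y = 2.0 * np.exp(0.5 * x) * rng.lognormal(sigma=0.1, size=100)

# ...but log(y) is linear in x: log(y) = log(2) + 0.5*x + noise.
slope, intercept = np.polyfit(x, np.log(y), 1)
print(f"slope ~ {slope:.2f} (true 0.5), "
      f"intercept ~ {intercept:.2f} (true {np.log(2):.2f})")
```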

Plotting scatter plots is easy in the case of simple (univariate) linear regression, but in the case of multivariate linear regression, two-dimensional pairwise scatter plots, rotating plots, and dynamic graphs can be plotted instead. Hypothesis testing can be carried out in linear regression for purposes such as checking whether an individual coefficient differs significantly from zero (a t-test on each beta) and whether the model as a whole explains the target better than the intercept alone (the overall F-test). Gradient descent is an optimisation algorithm.

Gradient descent works like a ball rolling down a graph (ignoring inertia). The ball moves along the direction of the steepest gradient and comes to rest at the flat surface (the minimum). Gradient descent starts from a random solution, and then, based on the direction of the gradient, the solution is updated to a new value where the cost function has a lower value. The update, repeated until convergence, is: theta_j := theta_j - alpha * dJ(theta)/d(theta_j), where alpha is the learning rate and J is the cost function. A linear regression model is quite easy to interpret. The model is of the form Y = beta_0 + beta_1*X_1 + ... + beta_n*X_n + error. The significance of this model lies in the fact that one can easily interpret and understand the marginal change in each feature and its consequences for the target.

What is robust regression? A regression model should be robust in nature, meaning that the model should not change drastically when a few observations change. It should also not be strongly affected by outliers. One way to achieve this is to use the WLS (Weighted Least Squares) method to estimate the regression coefficients: lower weights are given to the outliers or high-leverage points in the fitting, making these points less influential.
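A minimal statsmodels sketch of this down-weighting (synthetic data; the 1/variance weighting scheme is the standard WLS choice, an assumption of this example rather than something specified in the article):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 100)
noise_sd = 0.5 + 0.5 * x           # noisier observations at larger x
y = 1 + 2 * x + rng.normal(scale=noise_sd)

X = sm.add_constant(x)              # adds the intercept column of 1s
# Down-weight the noisy points: weight = 1 / variance of each observation.
wls = sm.WLS(y, X, weights=1.0 / noise_sd**2).fit()
ols = sm.OLS(y, X).fit()
print("OLS coefs:", ols.params.round(2))
print("WLS coefs:", wls.params.round(2))
```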

Before fitting the model, one must be well aware of the data, such as its trends, distribution, skewness, and so on.

Regression line worksheet answers

Regression analysis is one of multiple data analysis techniques used in business and the social sciences. The regression analysis technique is built on a number of statistical concepts, including sampling, probability, correlation, distributions, the central limit theorem, confidence intervals, z-scores, t-scores, hypothesis testing, and more.

These short objective-type questions with answers are very important for board exams as well as competitive exams. These short solved questions and quizzes are provided by Gkseries. For example, statements to assess: 1. Simple linear regression will have high bias and low variance. 2. Simple linear regression will have low bias and high variance.

A way to simplify the choice is to define a range of models with an increasing number of variables, then select the best. Forward selection: Starting from a null model, include variables one at a time, minimizing the RSS at each step. Backward selection: Starting from the full model, eliminate variables one at a time, choosing the one with the largest p-value at each step. Mixed selection: Starting from some model, include variables one at a time, minimizing the RSS at each step. If the p-value for some variable goes beyond a threshold, eliminate that variable.
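A hedged sketch of the forward-selection loop described above (synthetic data; the RSS-based scoring follows the description in the text, while the stopping rule of keeping a fixed number of variables is an assumption of this example):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def rss(X, y, cols):
    """Residual sum of squares of an OLS fit on the given columns."""
    pred = LinearRegression().fit(X[:, cols], y).predict(X[:, cols])
    return ((y - pred) ** 2).sum()

def forward_selection(X, y, n_keep):
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        # Starting from the current model, add the single variable
        # that lowers the RSS the most.
        best = min(remaining, key=lambda j: rss(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 1] - 3 * X[:, 4] + rng.normal(size=200)
print(forward_selection(X, y, 2))  # expected: features 4 and 1
```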

Linear Regression & Correlation - Practice Test Questions

Data scientists are in high demand, but there is a lack of qualified candidates. If you want to be a data scientist, you should be prepared to impress prospective companies with your knowledge. In addition to explaining why data science is so valuable, you need to show that you are technically skilled with the relevant concepts, frameworks, and applications. Below is a list of eight of the most common questions you can expect in an interview and how to compose your answers.

Linear regression and modelling problems are presented along with their solutions at the bottom of the page. Also, a linear regression calculator and grapher may be used to check answers and create more opportunities for practice.


25 Machine Learning Interview Questions & Answers – Linear Regression
