Data Science Asked on February 22, 2021
Let’s suppose that the stock value of various companies is the target of my models.
I have some “internal” predictors e.g. yearly sales of each company, sum of salaries at each company etc.
I have some “external” predictors e.g. geographical position of each company (latitude & longitude), population in the area in which each company operates etc.
Therefore, each observation in my dataset consists of a company's stock value along with its internal and external predictors.
The purpose of my project is to understand precisely how each of a company's internal predictors affects the stock value of that company.
In simpler words, I want to get accurate weights for the internal predictors that show me exactly how they affect the stock value of the respective company.
However, because there is relatively high multicollinearity between some of the internal predictors, I am not getting very accurate weights for each of them.
There may also be multicollinearity between the internal and the external predictors, but I do not consider this a problem, because I think you should account for all external predictors when estimating the weights of the internal predictors.
However, I am not sure that I should put all the internal predictors in the same model together, because, for example, I do not want the weight of a company's yearly sales to be distorted by the presence of other internal predictors such as the sum of salaries at that company.
In this regard, I am starting to think that the best way to go is to build multiple different models, where each one includes a single internal predictor but, in every case, all of the external predictors.
Does this make sense?
Do you have any better idea?
P.S.
I just found a post which is quite similar to my line of reasoning: https://www.researchgate.net/post/Is_building_separate_models_a_solution_to_multi-collinearity.
If you build separate models, you are treating the internal predictors/features as independent of each other. This will cause many of your internal predictors to get really high weights, which would probably not be the case if you included them together in the same model. An obvious alternative is to remove correlated predictors and then see what weights you get.
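One simple version of "remove correlated predictors" is to drop one member of each pair of predictors whose absolute correlation exceeds a threshold. A sketch with pandas, using hypothetical column names:

```python
# Drop one of each pair of predictors whose absolute correlation
# exceeds a threshold, by scanning the upper triangle of the
# correlation matrix.
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.9):
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return df.drop(columns=to_drop)

rng = np.random.default_rng(2)
sales = rng.normal(size=100)
df = pd.DataFrame({"sales": sales,
                   "salaries": 0.95 * sales + 0.05 * rng.normal(size=100),
                   "population": rng.normal(size=100)})
kept = drop_correlated(df).columns.tolist()
print(kept)  # salaries dropped, sales and population kept
```

Which member of a correlated pair survives is arbitrary here; in practice you would keep whichever predictor is more interpretable for your question.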
Answered by Atif Hassan on February 22, 2021
One way is to use a dimension-reduction method like PCA to remove the collinearity. Or you could use a regularization method like ridge regression.
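Both remedies can be sketched in a few lines with scikit-learn; the data below are made up, with two deliberately collinear predictors:

```python
# Two remedies for collinear predictors: PCA to decorrelate them, and
# ridge regression to shrink unstable coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 200
sales = rng.normal(size=n)
X = np.column_stack([sales, 0.9 * sales + 0.1 * rng.normal(size=n)])
y = 2.0 * sales + rng.normal(0, 0.5, n)

# PCA route: collapse the two collinear columns onto one component.
pca_model = make_pipeline(StandardScaler(), PCA(n_components=1), Ridge(alpha=1.0))
pca_model.fit(X, y)
r2 = pca_model.score(X, y)

# Ridge route: keep both columns, shrink the weights toward stability.
ridge = Ridge(alpha=10.0).fit(X, y)
print(r2, ridge.coef_)
```

The trade-off, relevant to the original question: ridge keeps the weights attached to the original predictors but biases them, while PCA components are uncorrelated but no longer correspond to individual internal predictors.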
Answered by nan hu on February 22, 2021