
Understanding minimizing cost correctly


I cannot wrap my head around this simple concept.



Suppose we have a linear regression with a single parameter $\theta$ to be optimized (for simplicity):

$h(x) = \theta \cdot x$

The error cost function could be defined as $J(\theta) = \frac{1}{m} \cdot \sum_x (h(x) - y(x))^2$, where the sum runs over each $x$.

Then, $\theta$ would be updated as:

$\theta = \theta - \alpha \cdot \frac{1}{m} \cdot \sum_x (h(x) - y(x)) \cdot x$, again summing over each $x$.

From my understanding, the multiplier after the $\alpha$ term is the derivative of the error cost function $J$. This term tells us the direction to head in order to arrive at the minimum, taking a small step at a time. I think I understand the concept of "hill climbing" correctly.
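A minimal sketch of that update rule in NumPy, assuming a made-up 1-D dataset and a hand-picked learning rate (the numbers and variable names are illustrative, not from the question):

    import numpy as np

    # Toy data for the model h(x) = theta * x
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])

    theta = 0.0    # initial guess
    alpha = 0.01   # learning rate
    m = len(x)

    for _ in range(1000):
        error = theta * x - y                      # h(x) - y(x) for each x
        gradient = (1.0 / m) * np.sum(error * x)   # the multiplier after alpha in the update
        theta -= alpha * gradient                  # one small step downhill

    print(theta)  # approaches the least-squares slope (about 1.99 for this data)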



Here is what I can't seem to wrap my head around:



If the form of the error function is known (as in our case: we could even plot it by taking enough values of $\theta$ and plugging them into the model), why can't we take the first derivative and set it to zero (partial derivatives if the function has multiple thetas)? This way we would have all the minima of the function. Then, with the second derivative, we could determine whether each one is a min or a max.



I've seen this done in calculus for simple functions like $y = x^2 + 5x + 2$ (many years ago, maybe I am wrong), so what is stopping us from doing the same thing here?



Sorry for asking such a silly question.



Thank you.










      linear-regression cost-function







asked by zafirzarya (new contributor), edited by Siong Thye Goh

          1 Answer

answered by Siong Thye Goh:

Consider differentiating this: $$\nabla_\theta \|X\theta - y\|^2 = 2X^T(X\theta - y) = 0$$



Hence, solving this would give us the normal equations $$X^T X \theta = X^T y.$$
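As a side note, for the single-parameter model in the question, where $X$ is just the column of $x$ values, this reduces to a scalar closed form:

$$\theta = \frac{\sum_x x \, y(x)}{\sum_x x^2}$$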



Solving this would give the optimal solution in theory. However, numerical stability is an issue, and don't forget computational complexity: solving a linear system is cubic in the number of features.



Also, sometimes we do not even have a closed form, so a gradient-based approach can be more applicable.
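A minimal NumPy sketch of the two routes contrasted here, on a small made-up dataset (the sizes and names are illustrative). Solving the normal equations costs roughly cubic time in the number of features, while each gradient step only needs a matrix-vector product:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 200, 5                      # 200 samples, 5 features (toy sizes)
    X = rng.normal(size=(m, n))
    y = X @ rng.normal(size=n) + 0.1 * rng.normal(size=m)

    # Route 1: solve the normal equations X^T X theta = X^T y directly.
    # Cubic in n, and X^T X can be badly conditioned.
    theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

    # Route 2: gradient descent on the mean squared error.
    theta_gd = np.zeros(n)
    alpha = 0.01
    for _ in range(5000):
        grad = (2.0 / m) * X.T @ (X @ theta_gd - y)   # gradient of (1/m) * ||X theta - y||^2
        theta_gd -= alpha * grad

    print(np.allclose(theta_closed, theta_gd, atol=1e-4))  # both routes agree closely
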






Thank you for replying. However, I am not mathematically literate enough to understand your answer. Is there a simpler answer? – zafirzarya










I found an answer on MSE that illustrates why computing $X^TX$ is bad. Most approaches that aim at directly solving the normal equation are more expensive than a gradient-based approach. Also, such gradient-based approaches have been adapted into a sampling-based variant known as stochastic gradient descent, which can handle very big data. – Siong Thye Goh
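
A minimal sketch of the stochastic variant mentioned here, again on made-up data: each update looks at a single randomly chosen sample instead of the full sum over the dataset, which is what lets the method scale to very large data.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 200, 5
    X = rng.normal(size=(m, n))
    y = X @ rng.normal(size=n) + 0.1 * rng.normal(size=m)

    # Stochastic gradient descent: one random sample per update.
    theta = np.zeros(n)
    alpha = 0.01
    for _ in range(20000):
        i = rng.integers(m)                            # pick one sample at random
        grad_i = 2.0 * X[i] * (X[i] @ theta - y[i])    # gradient of that sample's squared error
        theta -= alpha * grad_i

    # Compare against the normal-equation solution: close, up to SGD noise.
    print(np.round(theta - np.linalg.solve(X.T @ X, X.T @ y), 2))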









