Contents
Preface
Without Regularization
1. Main function
2. sigmoid function
3. plotData plotting
4. costFunction cost function
5. predict function
6. submit function
With Regularization
1. Main function
2. plotData plotting results
3. costFunctionReg cost function
4. plotDecisionBoundary function
5. mapFeature function
6. Notes
Preface
Logistic regression is a classifier for classification problems. It builds on linear regression by adding a nonlinear transformation, namely the sigmoid function. Although it is an adjustment of linear regression, the two do not solve the same kind of problem: linear regression solves prediction (regression) problems and fits its cost function by least squares, while logistic regression's cost function is derived from a probabilistic view. The details of logistic regression are covered in the sections below; the content is for reference only, and corrections are welcome in the comments.
Without Regularization
1. Main function
First, the logistic regression model is shown in the figure below. Each input component is multiplied by its corresponding weight; the weighted sum is passed through the sigmoid function, a nonlinear transformation that restricts the output to the range (0, 1). The cost is then computed from the model output and the true labels, and taking partial derivatives of the cost yields the gradient grad, which is used to update the weights theta.
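In symbols, the model computes (standard notation, matching what the code implements):

$$h_\theta(x) = g(\theta^{T} x) = \frac{1}{1 + e^{-\theta^{T} x}}$$

where $g$ is the sigmoid function and $\theta^{T} x$ is the weighted sum of the inputs.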
%% Machine Learning Online Class - Exercise 2: Logistic Regression
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the logistic
%  regression exercise. You will need to complete the following functions
%  in this exercise:
%
%     sigmoid.m
%     costFunction.m
%     predict.m
%     costFunctionReg.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear; close all; clc

%% Load Data
%  The first two columns contain the exam scores and the third column
%  contains the label.

data = load('ex2data1.txt');
% Note: examples are stored as columns here (features x examples),
% unlike the original course skeleton, which stores them as rows.
X = data(:, [1, 2])'; y = data(:, 3)';

%% ==================== Part 1: Plotting ====================
%  We start the exercise by first plotting the data to understand the
%  problem we are working with.

fprintf(['Plotting data with + indicating (y = 1) examples and o ' ...
         'indicating (y = 0) examples.\n']);

plotData(X, y);

% Put some labels
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')

% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============ Part 2: Compute Cost and Gradient ============
%  In this part of the exercise, you will implement the cost and gradient
%  for logistic regression. You need to complete the code in
%  costFunction.m

%  Setup the data matrix appropriately, and add ones for the intercept term
[m, n] = size(X);   % m = number of features (2), n = number of examples

% Add intercept term: a row of ones, since examples are columns
X = [ones(1, n); X];

% Initialize fitting parameters: a 1 x (m+1) row vector of zeros
initial_theta = zeros(1, m + 1);

% Compute and display initial cost and gradient
[cost, grad] = costFunction(initial_theta, X, y);

fprintf('Cost at initial theta (zeros): %f\n', cost);
fprintf('Expected cost (approx): 0.693\n');
fprintf('Gradient at initial theta (zeros): \n');
fprintf(' %f \n', grad);
fprintf('Expected gradients (approx):\n -0.1000\n -12.0092\n -11.2628\n');

% Compute and display cost and gradient with non-zero theta
test_theta = [-24, 0.2, 0.2];
[cost, grad] = costFunction(test_theta, X, y);

fprintf('\nCost at test theta: %f\n', cost);
fprintf('Expected cost (approx): 0.218\n');
fprintf('Gradient at test theta: \n');
fprintf(' %f \n', grad);
fprintf('Expected gradients (approx):\n 0.043\n 2.566\n 2.647\n');

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============= Part 3: Optimizing using fminunc =============
%  In this exercise, you will use a built-in function (fminunc) to find the
%  optimal parameters theta.

%  Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);

%  Run fminunc to obtain the optimal theta
%  This function will return theta and the cost
[theta, cost] = ...
    fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);

% Print theta to screen
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('Expected cost (approx): 0.203\n');
fprintf('theta: \n');
fprintf(' %f \n', theta);
fprintf('Expected theta (approx):\n');
fprintf(' -25.161\n 0.206\n 0.201\n');

% Plot Boundary
plotDecisionBoundary(theta, X, y);

% Put some labels
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')

% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============== Part 4: Predict and Accuracies ==============
%  After learning the parameters, you'll likely want to use it to predict
%  the outcomes on unseen data. In this part, you will use the logistic
%  regression model to predict the probability that a student with score
%  45 on exam 1 and score 85 on exam 2 will be admitted.
%
%  Furthermore, you will compute the training and test set accuracies of
%  our model.
%
%  Your task is to complete the code in predict.m

%  Predict probability for a student with score 45 on exam 1
%  and score 85 on exam 2
prob = sigmoid([1 45 85] * theta');
fprintf(['For a student with scores 45 and 85, we predict an admission ' ...
         'probability of %f\n'], prob);
fprintf('Expected value: 0.775 +/- 0.002\n\n');

% Compute accuracy on our training set
p = predict(theta, X);

fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
fprintf('Expected accuracy (approx): 89.0\n');
fprintf('\n');
With theta initialized to all zeros, the computed cost and gradient are shown below; red marks the results of this run, blue the reference values. The initial cost of 0.693 is simply $-\log 0.5$, since $\theta = 0$ makes $h_\theta(x) = 0.5$ for every example.
The classification result is plotted below:
The training accuracy is:
2. sigmoid function
The sigmoid function is also called an activation function. When a neuron's activation function is a sigmoid, the unit's output is guaranteed to lie between 0 and 1. Moreover, since the sigmoid is nonlinear, the unit's output is a nonlinear function of the weighted sum of its inputs. A neuron whose activation function is a sigmoid is called a sigmoid unit; its graph is shown below.
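For reference, the sigmoid and its derivative (the derivative is what keeps the gradient derivation below compact):

$$g(z) = \frac{1}{1 + e^{-z}}, \qquad g'(z) = g(z)\bigl(1 - g(z)\bigr)$$

with $g(0) = 0.5$, and $g(z) \to 0$ or $1$ as $z \to -\infty$ or $+\infty$.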
function g = sigmoid(z)
%SIGMOID Compute sigmoid function
%   g = SIGMOID(z) computes the sigmoid of z.

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the sigmoid of each value of z (z can be a matrix,
%               vector or scalar).

g = 1 ./ (1 + exp(-z));   % element-wise, so it handles scalars, vectors, and matrices

% =============================================================
end
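A quick spot check of the implementation, straight from the definition (values rounded):

sigmoid(0)            % ans = 0.5000
sigmoid([-10 0 10])   % ans = 0.0000  0.5000  1.0000 (approximately)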
3. plotData plotting
plotData first needs to organize the data: using the find function, it locates the indices of each class's examples by their labels, splits the data set into pos and neg, and then plots the two classes, as shown below.
function plotData(X, y)
%PLOTDATA Plots the data points X and y into a new figure
%   PLOTDATA(X, y) plots the data points with + for the positive examples
%   and o for the negative examples. Here X is assumed to be a 2 x M
%   matrix (examples as columns).

% Create New Figure
figure; hold on;

% ====================== YOUR CODE HERE ======================
% Instructions: Plot the positive and negative examples on a
%               2D plot, using the option 'k+' for the positive
%               examples and 'ko' for the negative examples.

% Data layout: row 1 is the horizontal coordinate, row 2 the vertical
% coordinate; the label vector y marks each column as positive (1) or
% negative (0).

% Find the indices of the positive and negative examples
pos = find(y == 1);
neg = find(y == 0);

% Plot the two classes
plot(X(1, pos), X(2, pos), 'k+', 'LineWidth', 2, 'MarkerSize', 7);
plot(X(1, neg), X(2, neg), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);

% =========================================================================

hold off;
end
4. costFunction cost function
The cost function has these properties: when the true label is 1 and the prediction is 1, the error is 0; when the true label is 1 but the prediction falls below 1, the error grows as the prediction shrinks. Likewise, when the true label is 0 and the prediction is 0, the error is 0; when the true label is 0 but the prediction rises above 0, the error grows with the prediction. This behavior follows from the shapes of the $-\log(h)$ and $-\log(1-h)$ curves.
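Written out, the per-example cost with exactly these properties is:

$$\mathrm{Cost}\bigl(h_\theta(x), y\bigr) = \begin{cases} -\log\bigl(h_\theta(x)\bigr) & \text{if } y = 1 \\ -\log\bigl(1 - h_\theta(x)\bigr) & \text{if } y = 0 \end{cases}$$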
The computation itself is straightforward: compute the prediction h_theta and substitute it into the formula below to obtain the cost J. From J, taking partial derivatives with respect to each parameter gives the gradient grad.
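The formula in question is the standard logistic regression cost, together with its gradient, which is exactly what costFunction implements:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \Bigl[ y^{(i)} \log h_\theta(x^{(i)}) + \bigl(1 - y^{(i)}\bigr) \log\bigl(1 - h_\theta(x^{(i)})\bigr) \Bigr]$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\, x_j^{(i)}$$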
The derivation of the gradient is not covered here; interested readers can work it out themselves.
The unregularized cost function has no particular pitfalls, but care is still needed when writing the code to avoid careless mistakes.
function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
%   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
%   parameter for logistic regression and the gradient of the cost
%   w.r.t. to the parameters.
%   Layout: theta is 1 x (n+1), X is (n+1) x m, y is 1 x m.

% Initialize some useful values
m = length(y); % number of training examples

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta

h_theta = sigmoid(theta * X);   % 1 x m vector of predictions

J = -sum(y .* log(h_theta) + (1 - y) .* log(1 - h_theta)) / m;

% Alternative: matrix multiplication already sums, so sum() and the
% element-wise products can be avoided by transposing y:
%   J = -(log(h_theta) * y' + log(1 - h_theta) * (1 - y')) / m;
% (One could compare the running time of the two forms.)

grad = sum((h_theta - y) .* X, 2) / m;   % (n+1) x 1 vector of partial derivatives

% No update of theta here (e.g. theta = theta - alpha .* grad);
% fminunc uses J and grad to perform the updates itself.

% =============================================================
end
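As a quick sanity check, the cost at theta = 0 should be $-\log 0.5 \approx 0.6931$ regardless of the data, since every prediction is 0.5. A minimal sketch with hypothetical toy data (the shapes follow the column-example convention used above):

% Hypothetical 3 x 3 toy set: a row of ones plus two feature rows
X_toy = [1 1 1; 34 78 45; 78 50 85];
y_toy = [0 1 1];
[J, grad] = costFunction(zeros(1, 3), X_toy, y_toy);
fprintf('J = %f\n', J);   % prints J = 0.693147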
5. predict function
The predict function classifies the model's outputs. The classification rule follows the sigmoid convention with 0.5 as the threshold: outputs at or above 0.5 are labeled 1, and outputs below 0.5 are labeled 0. After this step the data set's raw linear-combination output, squashed by the sigmoid, becomes a logical 0/1 value. This makes clear that logistic regression is really a classifier rather than a regression model.
function p = predict(theta, X)
%PREDICT Predict whether the label is 0 or 1 using learned logistic
%regression parameters theta
%   p = PREDICT(theta, X) computes the predictions for X using a
%   threshold at 0.5 (i.e., if sigmoid(theta*x) >= 0.5, predict 1)

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned logistic regression parameters.
%               You should set p to a vector of 0's and 1's

probs = sigmoid(theta * X);   % 1 x m vector of probabilities
num_cols = size(probs, 2);
p = zeros(1, num_cols);       % preallocate the prediction vector

for i = 1:num_cols
    if probs(1, i) >= 0.5
        p(i) = 1;
    else
        p(i) = 0;
    end
end

% =========================================================================
end
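Equivalently, the loop can be collapsed into a single vectorized line (a sketch, under the same row-theta/column-example convention):

p = double(sigmoid(theta * X) >= 0.5);   % 1 x m row of 0/1 predictions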
6. submit function
function submit()
  addpath('./lib');

  conf.assignmentSlug = 'logistic-regression';
  conf.itemName = 'Logistic Regression';
  conf.partArrays = { ...
    { ...
      '1', ...
      { 'sigmoid.m' }, ...
      'Sigmoid Function', ...
    }, ...
    { ...
      '2', ...
      { 'costFunction.m' }, ...
      'Logistic Regression Cost', ...
    }, ...
    { ...
      '3', ...
      { 'costFunction.m' }, ...
      'Logistic Regression Gradient', ...
    }, ...
    { ...
      '4', ...
      { 'predict.m' }, ...
      'Predict', ...
    }, ...
    { ...
      '5', ...
      { 'costFunctionReg.m' }, ...
      'Regularized Logistic Regression Cost', ...
    }, ...
    { ...
      '6', ...
      { 'costFunctionReg.m' }, ...
      'Regularized Logistic Regression Gradient', ...
    }, ...
  };
  conf.output = @output;

  submitWithConfiguration(conf);
end

function out = output(partId, auxstring)
  % Random Test Cases
  X = [ones(20,1) (exp(1) * sin(1:1:20))' (exp(0.5) * cos(1:1:20))'];
  y = sin(X(:,1) + X(:,2)) > 0;
  if partId == '1'
    out = sprintf('%0.5f ', sigmoid(X));
  elseif partId == '2'
    out = sprintf('%0.5f ', costFunction([0.25 0.5 -0.5]', X, y));
  elseif partId == '3'
    [cost, grad] = costFunction([0.25 0.5 -0.5]', X, y);
    out = sprintf('%0.5f ', grad);
  elseif partId == '4'
    out = sprintf('%0.5f ', predict([0.25 0.5 -0.5]', X));
  elseif partId == '5'
    out = sprintf('%0.5f ', costFunctionReg([0.25 0.5 -0.5]', X, y, 0.1));
  elseif partId == '6'
    [cost, grad] = costFunctionReg([0.25 0.5 -0.5]', X, y, 0.1);
    out = sprintf('%0.5f ', grad);
  end
end
With Regularization
Overfitting, underfitting, and the theory behind regularization will be discussed in a later post; here we only look at how to regularize the cost function.
1. Main function
%% Machine Learning Online Class - Exercise 2: Logistic Regression
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the second part
%  of the exercise which covers regularization with logistic regression.
%
%  You will need to complete the following functions in this exercise:
%
%     sigmoid.m
%     costFunction.m
%     predict.m
%     costFunctionReg.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear; close all; clc

%% Load Data
%  The first two columns contain the X values and the third column
%  contains the label (y). As before, examples are stored as columns.

data = load('ex2data2.txt');
X = data(:, [1, 2])'; y = data(:, 3)';

plotData(X, y);

% Put some labels
hold on;

% Labels and Legend
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')

% Specified in plot order
legend('y = 1', 'y = 0')
hold off;

%% =========== Part 1: Regularized Logistic Regression ============
%  In this part, you are given a dataset with data points that are not
%  linearly separable. However, you would still like to use logistic
%  regression to classify the data points.
%
%  To do so, you introduce more features to use -- in particular, you add
%  polynomial features to our data matrix (similar to polynomial
%  regression).
%

% Add Polynomial Features

% Note that mapFeature also adds a row of ones for us, so the intercept
% term is handled
X = mapFeature(X(1,:), X(2,:));

% Initialize fitting parameters
initial_theta = zeros(1, size(X, 1));

% Set regularization parameter lambda to 1
lambda = 1;

% Compute and display initial cost and gradient for regularized logistic
% regression
[cost, grad] = costFunctionReg(initial_theta, X, y, lambda);

fprintf('Cost at initial theta (zeros): %f\n', cost);
fprintf('Expected cost (approx): 0.693\n');
fprintf('Gradient at initial theta (zeros) - first five values only:\n');
fprintf(' %f \n', grad(1:5));
fprintf('Expected gradients (approx) - first five values only:\n');
fprintf(' 0.0085\n 0.0188\n 0.0001\n 0.0503\n 0.0115\n');

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

% Compute and display cost and gradient
% with all-ones theta and lambda = 10
test_theta = ones(1, size(X, 1));   % 1 x 28
[cost, grad] = costFunctionReg(test_theta, X, y, 10);

fprintf('\nCost at test theta (with lambda = 10): %f\n', cost);
fprintf('Expected cost (approx): 3.16\n');
fprintf('Gradient at test theta - first five values only:\n');
fprintf(' %f \n', grad(1:5));
fprintf('Expected gradients (approx) - first five values only:\n');
fprintf(' 0.3460\n 0.1614\n 0.1948\n 0.2269\n 0.0922\n');

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ============= Part 2: Regularization and Accuracies =============
%  Optional Exercise:
%  In this part, you will get to try different values of lambda and
%  see how regularization affects the decision boundary
%
%  Try the following values of lambda (0, 1, 10, 100).
%
%  How does the decision boundary change when you vary lambda? How does
%  the training set accuracy vary?
%

% Initialize fitting parameters
initial_theta = zeros(1, size(X, 1));

% Set regularization parameter lambda to 1 (you should vary this)
lambda = 1;

% Set Options
options = optimset('GradObj', 'on', 'MaxIter', 400);

% Optimize
[theta, J, exit_flag] = ...
    fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);

% Plot Boundary
plotDecisionBoundary(theta, X, y);
hold on;
title(sprintf('lambda = %g', lambda))

% Labels and Legend
xlabel('Microchip Test 1')
ylabel('Microchip Test 2')

legend('y = 1', 'y = 0', 'Decision boundary')
hold off;

% Compute accuracy on our training set
p = predict(theta, X);

fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);
fprintf('Expected accuracy (with lambda = 1): 83.1 (approx)\n');
2. plotData plotting results
The plot of the raw ex2data2.txt data (Microchip Test 1 vs. Microchip Test 2) is shown below.
3. costFunctionReg cost function
Two points deserve attention in the regularized cost function. First, theta0 (the bias weight) is not regularized, so it must be left out of the regularization term when computing J. Second, the partial derivatives split into two parts: the bias weight and the remaining weights. The gradient of the bias weight carries no lambda term, while every other weight theta_j picks up an extra lambda*theta_j/m. Also note that theta0_grad is a single number, so pay close attention to the dimensions of each quantity during the computation.
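Concretely, with the bias excluded from the penalty (the standard form, matching the code below):

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \Bigl[ y^{(i)} \log h_\theta(x^{(i)}) + \bigl(1 - y^{(i)}\bigr) \log\bigl(1 - h_\theta(x^{(i)})\bigr) \Bigr] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$

$$\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr) x_0^{(i)}, \qquad \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr) x_j^{(i)} + \frac{\lambda}{m} \theta_j \quad (j \ge 1)$$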
function [J, grad] = costFunctionReg(theta, X, y, lambda)
%COSTFUNCTIONREG Compute cost and gradient for logistic regression with regularization
%   J = COSTFUNCTIONREG(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters.
%   Layout: theta is 1 x 28, X is 28 x 118, y is 1 x 118.

% Initialize some useful values
m = length(y); % number of training examples

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta

h_theta = sigmoid(theta * X);   % 1 x m vector of predictions

% The bias weight theta(1) is not regularized, so its gradient has no
% lambda term; every remaining weight gets the extra lambda * theta_j / m.
theta0_grad = ((h_theta - y) * X(1,:)') / m;   % a scalar
theta_grad  = ((h_theta - y) * X(2:end,:)') / m + lambda .* theta(1, 2:end) / m;
grad = [theta0_grad theta_grad];

% Note: do NOT update theta here (e.g. theta = theta - alpha .* grad);
% fminunc performs the updates itself, and J must be evaluated at the
% theta that was passed in.

% The regularization term in J also excludes theta(1)
J = -sum(y .* log(h_theta) + (1 - y) .* log(1 - h_theta)) / m ...
    + lambda .* sum(theta(1, 2:end).^2) / (2 * m);

% =============================================================
end
The results are shown below; the red box marks the results of this run and the blue box the reference values. The two agree closely, which indicates the code is correct.
The cost and gradient with regularization parameter lambda = 10:
4. plotDecisionBoundary function
This function draws the boundary line separating the two classes. In the linear case (three parameters) it solves theta(1) + theta(2)*x1 + theta(3)*x2 = 0 for x2 at two endpoints and draws the resulting line; in the polynomial case it evaluates theta * mapFeature(u, v) over a grid and draws the z = 0 contour.
function plotDecisionBoundary(theta, X, y)
%PLOTDECISIONBOUNDARY Plots the data points X and y into a new figure with
%the decision boundary defined by theta
%   PLOTDECISIONBOUNDARY(theta, X, y) plots the data points with + for the
%   positive examples and o for the negative examples. X is assumed to be
%   either
%   1) a 3 x M matrix, where the first row is an all-ones row for the
%      intercept, or
%   2) an N x M matrix, N > 3, where the first row is all-ones

% Plot Data
plotData(X(2:3,:), y);
hold on

if size(X, 1) <= 3
    % Only need 2 points to define a line, so choose two endpoints
    plot_x = [min(X(2,:)) - 2, max(X(2,:)) + 2];

    % Calculate the decision boundary line:
    % theta(1) + theta(2)*x1 + theta(3)*x2 = 0, solved for x2
    plot_y = (-1 ./ theta(1,3)) .* (theta(1,2) .* plot_x + theta(1,1));

    % Plot, and adjust axes for better viewing
    plot(plot_x, plot_y)

    % Legend, specific for the exercise
    legend('Admitted', 'Not admitted', 'Decision Boundary')
    axis([30, 100, 30, 100])
else
    % Here is the grid range
    u = linspace(-1, 1.5, 50);
    v = linspace(-1, 1.5, 50);

    z = zeros(length(u), length(v));
    % Evaluate z = theta * mapFeature(u, v) over the grid
    for i = 1:length(u)
        for j = 1:length(v)
            % mapFeature returns a 28 x 1 column; theta is 1 x 28
            z(i,j) = theta * mapFeature(u(1,i), v(1,j));
        end
    end
    z = z'; % important to transpose z before calling contour

    % Plot z = 0
    % Notice you need to specify the range [0, 0]
    contour(u, v, z, [0, 0], 'LineWidth', 2)
end
hold off

end
5. mapFeature function
This function combines the two input features into polynomial terms, producing many more features. With degree = 6 it generates 28 features per example (including the bias term), since there are (6+1)(6+2)/2 = 28 monomials of degree at most 6 in two variables.
function out = mapFeature(X1, X2)
% MAPFEATURE Feature mapping function to polynomial features
%
%   MAPFEATURE(X1, X2) maps the two input features
%   to quadratic features used in the regularization exercise.
%
%   Returns a new feature array with more features, comprising of
%   X1, X2, X1.^2, X2.^2, X1*X2, X1*X2.^2, etc..
%
%   Inputs X1, X2 must be the same size. Features are returned as rows
%   (one column per example), starting with a row of ones for the
%   intercept.

degree = 6;
out = ones(size(X1(1,:)));   % the intercept row
for i = 1:degree
    for j = 0:i
        out(end+1, :) = (X1.^(i-j)) .* (X2.^j);   % append one feature row
    end
end

end
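A hypothetical quick check of the output shape (two example points, values chosen arbitrarily):

out = mapFeature([0.5 -0.25], [1.0 0.75]);
size(out)   % ans = 28  2 -> 28 polynomial features (rows) for 2 examples (columns)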
6. Notes
The two exercises share several functions, so those are not listed again for the second exercise. If you need a standalone copy of the second exercise, take the functions referenced in its main script as needed.
The training accuracy with regularization is shown below: