Saturday, February 25, 2017

TCP Client-Server Architecture Simple Demo with Java

TCP Client-Server Architecture Simple Demo, using Java.

CSSoldier back again with a new post to help fellow coders. Today we have some simple Java code for a TCP connection. For those who do not know, TCP stands for Transmission Control Protocol. TCP is the most widely used protocol for transferring packets of information across the internet. TCP is a reliable connection protocol that provides ordered and error-checked delivery of information. Needless to say, it is pretty damn important. More information on TCP can be found here: https://en.wikipedia.org/wiki/Transmission_Control_Protocol

This works by way of a client-server architecture. We initiate a point-to-point client-server connection using the Transmission Control Protocol. TCP guarantees delivery of the data it transmits in the form of "packets". If packets are lost or damaged, TCP will resend the data until it verifies that the packets have been successfully transmitted. This is important when you want to make sure your "I love you" message correctly makes it to your loved one... before they think you no longer care and you end up in the dog house. Anyway, moving on...
When establishing connectivity, the client and server each bind a "socket" to their end of the connection. Once a connection has been established, the client and server both read from, and write to, the socket when communicating. It's like picking up the phone, dialing a number, and waiting for the connection; when the person picks up, you each listen for the other's communication. You can think of this two-way communication with reliability checks like this: as one person talks, the other acknowledges what he/she says with "ok", "uh huh", "sure", or "I hear ya". If someone cuts out while saying something, you may say something like "I'm sorry, I didn't catch that last bit," and the person will repeat what they previously said.

Java uses the ServerSocket and Socket classes from the java.net package to bind to this combination of IP address and port number.

Here is a summary of the Client-Server Architecture:

1. Create a Socket or ServerSocket object and open it over a specified port and IP address or host name.
2. Instantiate InputStreamReader, BufferedReader, and PrintStream objects to stream data to and from each socket. (it's like chaining objects together)
3. Read from and Write to the stream between the sockets using their agreed-upon protocol.
4. Close the input and output streams.
5. Close the Socket or ServerSocket Objects.
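The five steps above can be sketched as one self-contained Java program that runs both ends in a single process (the class name EchoDemo and the "echo" protocol are purely illustrative; the full demo classes follow below):

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    public static void main(String[] args) throws Exception {
        // Step 1 (server side): open a ServerSocket; port 0 lets the OS pick a free port
        ServerSocket serverSocket = new ServerSocket(0);
        int port = serverSocket.getLocalPort();

        // Run the server on a background thread so one process can demo both ends.
        Thread server = new Thread(() -> {
            try (Socket s = serverSocket.accept();
                 // Step 2: chain InputStreamReader -> BufferedReader, plus a PrintStream
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintStream out = new PrintStream(s.getOutputStream())) {
                // Step 3: read one line from the client, write a reply back
                out.println("echo: " + in.readLine());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Step 1 (client side): connect a Socket to the same host and port
        try (Socket socket = new Socket("localhost", port);
             PrintStream out = new PrintStream(socket.getOutputStream());
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine()); // prints "echo: hello"
        } // Steps 4-5: try-with-resources closes the streams and sockets automatically
        server.join();
        serverSocket.close();
    }
}
```

Using port 0 avoids clashing with the fixed port 1776 the demo classes use; in real code you would pick an agreed-upon port just like the demo does.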

Here is an illustration:

[Diagram: client and server sockets connected over TCP, with streams flowing in both directions]
This code repository can be found on github here: https://github.com/GettinDatFoShow/javaAppLearning.git
Hope you enjoyed today's information and it helps you in your Computer Science or IT learning.
Have a great day and remember, #SudoAptGetUpdateYourBrain

resource used: https://www.youtube.com/watch?v=6G_W54zuadg

Here is each class (remember that the server socket code must be run first, obviously):

ServerSock.java:___________________________________________________________

/*
 * This is a server socket listening on port 1776 (I call it the freedom port).
 * Once a connection request is heard from a client, a socket is opened and accepted.
 * From the socket instance that is created, there are a few properties available.
 * Calling .getInetAddress() on the socket returns an InetAddress instance for the client on the
 * other side of the socket. We can then call .getHostAddress() on the InetAddress object
 * and obtain a string of the client's IP address.
 * Then, an InputStreamReader object is created using the socket's input stream as its parameter.
 * After this, a BufferedReader object is created using the InputStreamReader object as the parameter.
 * The server is now ready to listen for messages sent through the socket from the client.
 * There is a while loop around the readLine() call on the BufferedReader object
 * because readLine() stops at the '\n' character. If it doesn't keep looping, it will not be
 * able to read multiple lines from the client.
 * Once the client connects, the server sends a message back with the client's IP address,
 * along with the date. It then sends an empty string (which will have a '\n' character appended)
 * so that the client will close the socket (this is purely for demo purposes and may not be needed).
*/
package tcpdemo;
import java.io.*;
import java.net.*;
import java.util.Date;
/**
 *
 * @author Robert Morris
 */
public class ServerSock {
 
    public static void main(String[] args) throws Exception{
     
        ServerSock server = new ServerSock();
        server.run();
     
    }
 
 
    public void run() throws Exception{
     
        String messageIn = null;
     
        ServerSocket serverSocket = new ServerSocket(1776);
        Socket socket = serverSocket.accept();
        // Information Regarding the Connection.
        InetAddress INA = socket.getInetAddress();
        String hostIP = INA.getHostAddress();
        Date date = new Date();
        InputStreamReader inRead = new InputStreamReader(socket.getInputStream());
        BufferedReader bRead = new BufferedReader(inRead);
     
        while (( messageIn = bRead.readLine()) != null){
            if (messageIn.length() <= 2){
                break;
            }
            System.out.println(messageIn);
            String messageOut = "Your IP address is: "+ hostIP + "\n"
                    + "It is now: " + date;
            PrintStream pStream = new PrintStream(socket.getOutputStream());
            pStream.println(messageOut);
            pStream.println("");
            break;
         
        }
        socket.close();
    }
 
}

// ***** end ServerSock.java *****

_________________________________________________________________________

ClientSock.java:___________________________________________________________

/*
 * This is a client socket class that connects to the server socket
 * on a pre-specified port of 1776, supplying a host name of 'localhost'.
 * Once the socket is accepted by the server, a PrintStream object is created with
 * socket.getOutputStream() as a parameter; following this creation, a message is sent
 * to the server.
 * Then, an InputStreamReader object is created using the socket's input stream as its parameter.
 * After this, a BufferedReader object is created using the InputStreamReader object as the parameter.
 * The client is now ready to listen for messages sent through the socket from the server.
 * There is a while loop around the readLine() call on the BufferedReader object
 * because readLine() stops at the '\n' character. If it doesn't keep looping, it will not be
 * able to read multiple lines from the server.
 * The loop does not terminate until a specified message is received; in this case the
 * specified message is a 'new line' character.
 */
package tcpdemo;

import java.io.*;
import java.net.*;

/**
 *
 * @author Robert Morris
 */
public class ClientSock {
    
   public static void main(String [] args) throws Exception{
       
       ClientSock client = new ClientSock();
       client.run();
       
   } 
    
   public void run() throws Exception{
       String messageIn = null;
       Socket socket = new Socket("localhost", 1776);
       PrintStream pStream = new PrintStream(socket.getOutputStream());
       pStream.println("I NEED TO CONNECT NOWWWWW!!!!");
       
       InputStreamReader inRead = new InputStreamReader(socket.getInputStream());
       BufferedReader bRead = new BufferedReader(inRead);
       
       
       while((messageIn = bRead.readLine()) != null){
            
           if (messageIn.length() <= 2){
                break;
            }
              System.out.println(messageIn);
       }
       socket.close();
              
   }
    
}

// ***** end ClientSock.java *****

Friday, February 17, 2017

Data Analytics: MLE proof as biased estimator of Variance

More data analytics here. Below is a proof that the variance Maximum Likelihood Estimator is a biased estimator of Variance.
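The key result the proof establishes, for i.i.d. samples X_1, ..., X_n with true variance σ² (a standard identity, summarized here for reference):

```latex
\hat{\sigma}^2_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2,
\qquad
E\!\left[\hat{\sigma}^2_{\mathrm{MLE}}\right] = \frac{n-1}{n}\,\sigma^2 \neq \sigma^2 .
```

Because the bias factor (n-1)/n approaches 1 as n grows, the MLE is still consistent, which is exactly the convergence the MATLAB plots show.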


Hopefully I did everything right there.
I also wrote MATLAB code that compares the Sample Variance to the Maximum Likelihood Estimator for Variance.
Here is what the code output will look like:


And here, I present you with my matlab code which can all be found at: https://github.com/GettinDatFoShow/MatLab_CODE

%% __________________________BEGIN MLE MATLAB CODE ________________________
%% ------------------------------------------------------------------------------------------------------------------

% Variance Maximum likelihood estimator tests
% @author Robert Morris - Delaware State University
% Data Analytics - Project 2 
% May take 5 - 10 seconds or so to run. 


% This project starts with a randomly generated
% population of weights for 1,000,000 individuals.
% It then calculates the mean and variance of the weights.
% Each graph/plot is explained; please maximize the figure after running for
% full effect.

clc; clear; close all

numPopulation= 1000000;
maxWeight= 300;
minWeight= 100;
populationWeights= zeros(1,numPopulation);



% randomly create population of weights


for i=1:1:numPopulation
    populationWeights(1,i) = randi([minWeight,maxWeight], 1);
end

meanWeight= mean(populationWeights); % find the population mean weight
weight= 0;
for i=1:1:numPopulation
    weight= weight + (populationWeights(1,i) - meanWeight)^2;
end

weightVar= weight/numPopulation; % notice original variance calculation of size n


experiments=10;
sampleSize=100;
sampleXaxis = zeros(1,experiments);
actualWeightVars = zeros(1, experiments);
MLEVars = zeros(1, experiments);
SampleVars = zeros(1, experiments);
samples = zeros(1,sampleSize);
sampMeanWeight = 0;
sampleVariance = 0;
MLEVariance = 0;

for j=1:1:experiments % sample mean Finder
   
   for k=1:1:sampleSize
     whichPerson= randi(numPopulation-1,1) + 1; 
     samples(1,k) = populationWeights(1,whichPerson);
   end

   sampleMean= mean(samples);  % find the sample mean
   sampleWeight= 0;

   for l=1:1:sampleSize % sample Variance Finder
      sampleWeight = sampleWeight + (samples(1,l)-sampleMean)^2;
   end

   sampleVariance = (1/(sampleSize-1)) * sampleWeight; % sample variance calculation of size n-1
   MLEVariance = (1/(sampleSize)) * sampleWeight; % max likelihood sample calculation of size n
   SampleVars(1,j) = sampleVariance;
   MLEVars(1, j) = MLEVariance;
   actualWeightVars(1, j) = weightVar;
   sampleXaxis(1,j) = sampleSize;
   sampleSize = sampleSize * 2;
end   
   
fig = figure;
set(0, 'defaultfigureposition', [1300 10 900 600])
fig.NumberTitle = 'off';
fig.Name = 'Variance Maximum Likelihood Test';
hold on;

subplot(2,2,1);
x = linspace(1,10,10);
plot(x, SampleVars, 'b', x, MLEVars, 'g', x, actualWeightVars, 'r--', 'lineWidth', 2);
xticks([1,2,3,4,5,6,7,8,9,10]);
xticklabels({'100','200','400','800','1600','3200','6400','12800','25600', '51200'});
title('Variance Estimators Comparison');
xlabel('Variance Experiment Sample Sizes');
ylabel('Calculated Variance For Weights');
xtickangle(45);
legend({'Sample Variance', 'Variance MLE', 'Population Variance'}, 'Location', 'northeast');

testMLEVarianceMax = zeros(1,experiments);
testMLEVarianceMin = zeros(1,experiments);
testMLEVarianceMean = zeros(1,experiments);
actualWeightVars = zeros(1, experiments);
sampleSize=10;

for n=1:1:experiments
    
   samples= zeros(1,sampleSize);
   numTrials=500;
   sampMeanWeight = 0;
   MLEVars = zeros(1, numTrials);
   MLEVarMax = 0;
   MLEVarMin = 0;
   MLEMean = 0;
   
   for j=1:1:numTrials % sample mean Finder

       for k=1:1:sampleSize
         whichPerson= randi(numPopulation-1,1) + 1; 
         samples(1,k) = populationWeights(1,whichPerson);
       end

       sampleMean= mean(samples);  % find the sample mean
       sampleWeight= 0;

       for l=1:1:sampleSize % sample Variance Finder
          sampleWeight = sampleWeight + (samples(1,l)-sampleMean)^2;
       end

       MLEVariance = (1/(sampleSize)) * sampleWeight; % max likelihood variance calculation of size n
       MLEVars(1,j) = MLEVariance;

   end
   
   MLEVarMax = max( MLEVars ); % max outlier of the sample variance
   MLEVarMin = min( MLEVars ); % min outlier of the sample variance
   MLEMean = mean( MLEVars ); % the average sample variance of the samples
   
   testMLEVarianceMax(1,n) = MLEVarMax;
   testMLEVarianceMin(1,n) = MLEVarMin;
   testMLEVarianceMean(1,n) = MLEMean;
   actualWeightVars(1,n) = weightVar;
   sampleSize = sampleSize*2;
   
end

subplot(2,2,2);
x = linspace(1,10,10);
plot(x, testMLEVarianceMax, 'b--', x, testMLEVarianceMin, 'g--', x, testMLEVarianceMean, 'c--*', x, actualWeightVars, 'r--', 'LineWidth',2);
xticks([1,2,3,4,5,6,7,8,9,10]);
xticklabels({'10','20','40','80','160','320','640','1280','2560', '5120'});
title('MLE Variance Convergence');
xlabel('Variance Sample Sizes');
ylabel('Calculated Variance For Weights');
xtickangle(45);
legend({'MLE Var Max', 'MLE Var Min', 'MLE Var Mean', 'Population Variance'}, 'Location', 'northeast');


histoTestSize = 1000;
MLEVarsHisto = zeros(1, histoTestSize);
sampleSize = 5000;
samples = zeros(1,sampleSize);

for t=1:1:histoTestSize

   for p=1:1:sampleSize
     whichPerson= randi(numPopulation-1,1) + 1; 
     samples(1,p) = populationWeights(1,whichPerson);
   end

   sampleMean= mean(samples);  % find the sample mean
   sampleWeight= 0;

   for m=1:1:sampleSize % MLE Variance Helper
      sampleWeight = sampleWeight + (samples(1,m)-sampleMean)^2;
   end

   MLEVariance = (1/(sampleSize)) * sampleWeight; % max likelihood sample calculation of size n
   MLEVarsHisto(1, t) = MLEVariance;

end



SampleVarsHisto2 = zeros(1, histoTestSize);
sampleSize2 = 5000;
samples2 = zeros(1,sampleSize2);
sampleVariance2 = 0;

for t=1:1:histoTestSize

   for p=1:1:sampleSize2
     whichPerson= randi(numPopulation-1,1) + 1; 
     samples2(1,p) = populationWeights(1,whichPerson);
   end

   sampleMean= mean(samples2);  % find the sample mean
   sampleWeight= 0;

   for m=1:1:sampleSize2 % MLE Variance Helper
      sampleWeight = sampleWeight + (samples2(1,m)-sampleMean)^2;
   end

   sampleVariance2 = (1/(sampleSize2-1)) * sampleWeight; % sample variance calculation of size n-1
   SampleVarsHisto2(1, t) = sampleVariance2;

end

histoEffect = zeros(1,80);

for v = 1:1:80
    histoEffect(1, v) = weightVar;
end

bins = 50;
subplot(2,2,3);
hold on;
histogram(MLEVarsHisto, bins, 'facecolor', 'm');
title('MLE Variance Distribution, 5k Sample Size Test ');
xlabel('MLE Variances ');
ylabel('Totals Calculated');
xtickangle(45);
histogram(histoEffect, bins, 'facecolor', 'r', 'BinWidth', 2);

subplot(2,2,4);
hold on;
histogram(SampleVarsHisto2, bins, 'facecolor', 'g');
title('Sample Variance Distribution, 5k Sample Size Test ');
xlabel('Sample Variances ');
ylabel('Totals Calculated');
xtickangle(45);
histogram(histoEffect, bins, 'facecolor', 'r', 'BinWidth', 2);

%% ------------------------------------------------------------------------------------------------------------------
%% ____________________________ END MLE MATLAB CODE _______________________
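For readers following along in Java rather than MATLAB, the core n versus n-1 comparison at the heart of the script above can be sketched like this (the class name, helper names, and sample weights are made up for illustration):

```java
import java.util.Arrays;

public class VarianceDemo {
    // Sample variance: divide by (n - 1); unbiased estimator of the population variance
    static double sampleVariance(double[] x) {
        double mean = Arrays.stream(x).average().orElse(0);
        double ss = Arrays.stream(x).map(v -> (v - mean) * (v - mean)).sum();
        return ss / (x.length - 1);
    }

    // MLE variance: divide by n; biased low by a factor of (n - 1) / n
    static double mleVariance(double[] x) {
        double mean = Arrays.stream(x).average().orElse(0);
        double ss = Arrays.stream(x).map(v -> (v - mean) * (v - mean)).sum();
        return ss / x.length;
    }

    public static void main(String[] args) {
        double[] weights = {150, 200, 250, 180, 220}; // tiny made-up sample
        System.out.println(sampleVariance(weights)); // prints 1450.0
        System.out.println(mleVariance(weights));    // prints 1160.0
    }
}
```

As in the MATLAB experiments, the two estimates differ noticeably for a tiny sample and converge as the sample grows.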

Saturday, February 4, 2017

Data Analytics: Proof s² is an unbiased estimator of Variance or σ²

Here is a little data analytics for you guys: a mathematical proof that Sample Variance, otherwise written as s², is an unbiased estimator of Variance, or σ².
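The crux of that proof, stated compactly (a standard expansion, summarized here for reference):

```latex
E\left[s^2\right]
= \frac{1}{n-1}\,E\!\left[\sum_{i=1}^{n}\left(X_i-\bar{X}\right)^2\right]
= \frac{1}{n-1}\left(n\left(\sigma^2+\mu^2\right) - n\left(\frac{\sigma^2}{n}+\mu^2\right)\right)
= \frac{(n-1)\,\sigma^2}{n-1}
= \sigma^2 ,
```

using the identities \(\sum_i (X_i-\bar{X})^2 = \sum_i X_i^2 - n\bar{X}^2\), \(E[X_i^2]=\sigma^2+\mu^2\), and \(E[\bar{X}^2]=\sigma^2/n+\mu^2\).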




And here is MATLAB code that makes this consistency apparent as sample sizes drawn from the main population grow larger.

===============START MATLAB CODE======================

% estimatorSampleVarianceConsistencyTest
% @author Robert Morris - Delaware State University
% Data Analytics - Project 2


% This project starts with a randomly generated
% population of weights for 100,000 individuals.
% It then calculates the mean and variance of the weights
% in the overall population.
% Then it conducts a test of 10 different previously selected
% sample sizes from the population. For each sample size, 100
% trials are conducted in which the program calculates the sample
% mean and variance, recording each value during each trial.
% At the end of each trial the overall sample variance mean, sample
% variance max outlier, and sample variance min outlier of the trials are recorded.
% After the trials are run for each previously selected sample size, the results are
% displayed in a figure for comparison. The plot in the figure shows how the
% sample variance numbers converge toward the overall population weight variance
% as the sample size grows.

clc; clear; close all

numPopulation= 100000;
maxWeight= 300;
minWeight= 100;
populationWeights= zeros(1,numPopulation);



% randomly create population of weights


for i=1:1:numPopulation
    populationWeights(1,i) = randi([minWeight,maxWeight], 1);
end

meanWeight= mean(populationWeights); % find the population mean weight
weight= 0;

for i=1:1:numPopulation
    weight= weight + (populationWeights(1,i) - meanWeight)^2;
end

weightVar= weight/numPopulation; % notice original variance calculation of size n
   
sampleSize=10;

% Test sample variance variables to show
% convergence upon larger sample sizes
experiments=10;
sampleXaxis = zeros(1,experiments);
testSampleVarianceMax = zeros(1,experiments);
testSampleVarianceMin = zeros(1,experiments);
testSampleVarianceMean = zeros(1,experiments);
actualWeightVars = zeros(1, experiments);

for n=1:1:experiments
   
   samples= zeros(1,sampleSize);
   numTrials=100;
   sampMeanWeight = 0;
   sampleVars = zeros(1, numTrials);
   sampleVarMax = 0;
   sampleVarMin = 0;
   sampleVarMean = 0;
  
   for j=1:1:numTrials % sample mean Finder

       for k=1:1:sampleSize
         whichPerson= randi(numPopulation-1,1) + 1;
         samples(1,k) = populationWeights(1,whichPerson);
       end

       sampleMean= mean(samples);  % find the sample mean
       sampleWeight= 0;

       for l=1:1:sampleSize % sample Variance Finder
          sampleWeight = sampleWeight + (samples(1,l)-sampleMean)^2;
       end

       sampleVariance = (1/(sampleSize-1)) * sampleWeight; % sample variance calculation of size n-1
       sampleVars(1,j) = sampleVariance;

   end
  
   sampleVarMax = max( sampleVars ); % max outlier of the sample variance
   sampleVarMin = min( sampleVars ); % min outlier of the sample variance
   sampleVarMean = mean( sampleVars ); % the average sample variance of the samples
  
   testSampleVarianceMax(1,n) = sampleVarMax;
   testSampleVarianceMin(1,n) = sampleVarMin;
   testSampleVarianceMean(1,n) = sampleVarMean;
   actualWeightVars(1,n) = weightVar;
   sampleXaxis(1,n) = sampleSize;
   sampleSize = sampleSize*2;
  
end

fig = figure;
set(0, 'defaultfigureposition', [1300 10 900 600])
fig.NumberTitle = 'off';
fig.Name = 'Sample Variance Consistency Experiment From Population Weights';

x = linspace(1,10,10);
plot(x, testSampleVarianceMax, 'b--', x, testSampleVarianceMin, 'g--', x, testSampleVarianceMean, 'c--*', x, actualWeightVars, 'r--', 'LineWidth',2);
xticks([1,2,3,4,5,6,7,8,9,10]);
xticklabels({'10','20','40','80','160','320','640','1280','2560', '5120'});
title('Sample Variation to Actual Variation Convergence');
xlabel('Variance Experiment Sample Sizes');
ylabel('Calculated Variance For Weights');
xtickangle(45);
legend({'Samp Var Max', 'Samp Var Min', 'Samp Var Mean', 'Pop Weight Variance'}, 'Location', 'northeast');

===============END MATLAB CODE=========================




Here is a shot of what the code produces:




Hope this helps everyone on their path to becoming a better computer scientist.
Enjoy and #codeON #sudoAptGetUpdateYourBrain