
5 Must-Have Skills to Become a Machine Learning Engineer

First, let us understand what machine learning is:

  • In simple words, machine learning is all about making computers perform intelligent tasks without being explicitly programmed.
  • This is achieved by training the computer with lots of data.
For example:
  • Detecting whether an email is spam or not.
  • Recognising handwritten digits.
  • Detecting fraudulent transactions.
  • And many other such applications.



Now let us see the top five skills you need to get a machine learning job.

  1. Math skills.
  2. Programming skills.
  3. Data engineering skills.
  4. Knowledge of machine learning algorithms.
  5. And finally, knowledge of machine learning frameworks.


Math skills

Under math skills, we need to know probability and statistics, linear algebra, and calculus.

Probability and statistics:

Machine learning is very closely related to statistics. You need to know the fundamentals of probability theory and statistics: descriptive statistics, Bayes' rule, random variables, probability distributions, sampling, hypothesis testing, regression, and decision analysis.
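As a small taste of how Bayes' rule is used in machine learning, here is a sketch of the classic spam-filter calculation. The probabilities below are made up purely for illustration:

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical spam-filter numbers (made up for illustration):
# 20% of all mail is spam; the word "free" appears in 60% of spam
# and in 5% of legitimate mail. Given "free", how likely is spam?

p_spam = 0.20
p_free_given_spam = 0.60
p_free_given_ham = 0.05

# Total probability of seeing the word "free" in any mail
p_free = p_free_given_spam * p_spam + p_free_given_ham * (1 - p_spam)

# Posterior probability that a mail containing "free" is spam
p_spam_given_free = p_free_given_spam * p_spam / p_free
print(round(p_spam_given_free, 3))  # 0.75
```

So even with these modest made-up numbers, a single word already pushes the spam probability from 20% to 75%; real spam filters combine evidence from many words in exactly this way.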


Linear algebra

You need to know how to work with matrices and the basic operations on them, such as matrix addition and subtraction, scalar and vector multiplication, inverse, transpose, and vector spaces.
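A quick sketch of those operations in Python, assuming NumPy is installed (the matrices are arbitrary examples):

```python
import numpy as np  # assumes NumPy is available

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
v = np.array([1, 0])

print(A + B)             # element-wise addition
print(A - B)             # element-wise subtraction
print(2 * A)             # scalar multiplication
print(A @ v)             # matrix-vector multiplication -> [1 3]
print(A.T)               # transpose
print(np.linalg.inv(A))  # inverse (A must be non-singular)
```

In practice nearly every machine learning library represents data and model parameters as matrices like these, which is why this vocabulary matters.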


Calculus

In calculus, we need to know the basics of differential and integral calculus.
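Why does calculus matter? Training a model usually means minimising an error function, and the derivative tells you which way to move. A minimal gradient-descent sketch on a toy function (the function and learning rate are illustrative):

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is
# f'(x) = 2 * (x - 3). Calculus tells us the minimum is at x = 3.

def df(x):
    return 2 * (x - 3)

x = 0.0               # arbitrary starting point
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * df(x)  # step downhill along the gradient

print(round(x, 4))  # 3.0
```

This same idea, repeated over millions of parameters, is how neural networks are trained.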


Programming skills

  • A little bit of coding skill is enough to get started.
  • But it is preferred to have knowledge of data structures, algorithms, and object-oriented programming (OOP) concepts and diagrams.


  • Some of the popular programming languages to learn for machine learning are Python, Java, and C.
  • It is your preference which one programming language to master.
  • But it is advisable to have a little understanding of the others and what their advantages and disadvantages are over your preferred one.


Data engineering skills

These include the ability to work with large amounts of data (big data), data pre-processing, knowledge of SQL and NoSQL databases, ETL (Extract, Transform, Load) operations, and data analysis and visualisation skills.
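To make the ETL idea concrete, here is a tiny sketch using only the Python standard library; the CSV data, column names, and aggregation are all made up for illustration:

```python
import csv
import io

# Extract: read raw CSV (an in-memory string standing in for a file or feed)
raw = "name,amount\nalice,100\nbob,250\nalice,50\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: convert amounts to integers and aggregate per name
totals = {}
for row in rows:
    totals[row["name"]] = totals.get(row["name"], 0) + int(row["amount"])

# Load: write the summarised result out as CSV again
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["name", "total"])
for name, total in sorted(totals.items()):
    writer.writerow([name, total])

print(out.getvalue())
```

Real ETL pipelines do the same three steps at much larger scale, with databases or distributed storage at either end.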


How to become a DATA ENGINEER?

A data engineer is somebody who is concerned with moving data in and out of the Hadoop ecosystem, giving data scientists and data analysts better views into the data.

  • Data engineers are involved in the day-to-day details of how that data comes in.
  • They decide how to ingest the data, build the applications around it, and tune those applications so that the data comes in faster, to support the business analysts.
  • That supports business decisions, and it helps data scientists create better models by giving them more data to work with.


What is Hadoop?

Have you ever wondered how Google runs queries over its mountain of data?


How is Facebook able to deal so quickly with such large quantities of information?

Here we are going into the wild west of data management, called big data.

  • Now, while you may or may not have heard of big data and related terms like Hadoop or MapReduce, you can be sure that they will be a regular part of your conversations in the coming months and years.
  • This is because 90% of the world's data was generated in just the last two years, and this accelerating trend is going to continue.
  • All this new data is coming from smartphones, social networks, trading platforms, machines, and other sources.
  • Since most of this data is already available, the question is whether we are going to take advantage of it.
  • In the past, when larger and larger quantities of data needed to be interrogated, businesses would simply write larger and larger checks to their database vendor of choice.
  • However, in the early 2000s, companies like Google ran into a wall: their vast quantities of data were simply too large to pump through a single database bottleneck, and they could not write a large enough check to process the data.
  • To address this, the Google Labs team developed an algorithm that allowed large data calculations to be chopped up into smaller chunks and mapped out to many computers; when the calculations were done, the pieces were brought back together to produce the resulting dataset.
  • They called this algorithm MapReduce.
  • This algorithm was later used to develop an open-source project called Hadoop, which allows applications to run using the MapReduce algorithm.
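The map/shuffle/reduce idea described above can be sketched in a few lines of plain Python. This is the classic word-count example on a made-up set of documents; in real Hadoop the three phases run across many machines, but the shape of the computation is the same:

```python
from collections import defaultdict

documents = ["the cat sat", "the dog sat", "the cat ran"]

# Map: each document independently emits (word, 1) pairs
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the pairs by key (in Hadoop this happens across the cluster)
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: combine each group's values into a final count
counts = {word: sum(values) for word, values in groups.items()}
print(counts)  # {'the': 3, 'cat': 2, 'sat': 2, 'dog': 1, 'ran': 1}
```

Because the map step touches each document independently, it can be spread over as many machines as you have, which is exactly what made the approach scale past the single-database bottleneck.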


There are usually two ingredients driving organizations to investigate Hadoop.

  1. One is a lot of data, generally larger than 10 terabytes.
  2. The other is high calculation complexity, such as statistical simulations.
  • Any combination of those two ingredients, together with the need to get results faster and more cheaply, will drive your return on investment.
  • Over the long run, Hadoop will become part of our day-to-day information architecture.
  • We will start to see Hadoop playing a central role in statistical analysis, ETL processing, and business intelligence.


Knowledge of machine learning algorithms

One should be familiar with the popular families of machine learning algorithms: classification algorithms, anomaly detection algorithms, regression algorithms, clustering algorithms, and reinforcement learning algorithms.

  1. Classification algorithms
  • These are used to classify a record.
  • They are used for questions which can have only a limited number of answers.

For example:- 

  • Is it cold? (Yes or No)
  • Will you go to work today? (Yes, No or May be)

Note: When you have only two choices, it is called two-class classification. If you have more than two choices, it is called multi-class classification.
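A two-class classifier can be surprisingly simple. Here is a sketch of a 1-nearest-neighbour classifier for the "Is it cold?" question above, using made-up training temperatures (in °C):

```python
# Tiny 1-nearest-neighbour two-class classifier.
# Training data: (temperature, label) pairs, made up for illustration.
train = [(2, "yes"), (5, "yes"), (20, "no"), (28, "no")]

def classify(temp):
    # Answer with the label of the closest training temperature
    return min(train, key=lambda pair: abs(pair[0] - temp))[1]

print(classify(4))   # yes
print(classify(25))  # no
```

Real classifiers (decision trees, logistic regression, neural networks) are more sophisticated, but they all share this shape: learn from labelled examples, then assign one of a fixed set of labels to a new record.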


    2. Anomaly Detection Algorithms
  • These analyse a certain pattern and alert you whenever there is a change in the pattern.


For example:-

  • In real life, your credit card company uses anomaly detection algorithms to flag any transaction that is unusual compared with your transaction history.
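One of the simplest ways to flag an unusual transaction is a z-score test: how many standard deviations is this amount from the customer's usual spending? The history and the 3-sigma threshold below are made up for illustration:

```python
import statistics

# Made-up transaction history for one customer (amounts in dollars)
history = [20, 25, 22, 30, 18, 24, 26, 21]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(amount, threshold=3.0):
    # Flag amounts more than `threshold` standard deviations from the mean
    return abs(amount - mean) / stdev > threshold

print(is_anomalous(23))   # False: a typical amount
print(is_anomalous(500))  # True: far outside the usual pattern
```

Production fraud systems use far richer features (merchant, location, time of day), but the core idea is the same: model "normal" and alert on deviations.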


   3. Regression Algorithms
  • Regression algorithms are used to calculate numeric values.

For example:-

  • What will the temperature be tomorrow?
  • How much discount can you give on a particular item?
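The workhorse regression algorithm is least-squares linear regression: fit a line y = a·x + b through the data and use it to predict new numeric values. A sketch on made-up data:

```python
# Least-squares fit of y = a*x + b (data made up for illustration,
# e.g. x = day number, y = temperature).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Standard least-squares formulas for slope and intercept
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # slope close to 2, intercept close to 0
print(round(a * 6 + b, 2))       # predicted numeric value for x = 6
```

Notice that unlike classification, the answer is a number on a continuous scale, not one of a fixed set of labels.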


  4. Clustering Algorithms
  • These help you understand the structure of a data set.
  • These algorithms separate the data into groups, or clusters, to ease the interpretation of the data.
  • By understanding how the data is organised, you can better predict the behaviour of a particular event.
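The best-known clustering algorithm is k-means: repeatedly assign each point to its nearest centroid, then move each centroid to the average of its points. A tiny one-dimensional sketch with made-up data and starting centroids:

```python
# 1-D k-means with k = 2: split purchase amounts into two clusters.
data = [1.0, 1.5, 2.0, 9.0, 10.0, 11.0]
centroids = [1.0, 9.0]  # arbitrary starting guesses

for _ in range(10):  # a few update rounds are plenty for this toy data
    clusters = [[], []]
    for x in data:
        # Assign each point to its nearest centroid
        nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Move each centroid to the mean of its assigned points
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)  # [1.5, 10.0]
print(clusters)   # [[1.0, 1.5, 2.0], [9.0, 10.0, 11.0]]
```

Note that no labels were given: the groups emerge from the structure of the data itself, which is why clustering is called unsupervised learning.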


  5. Reinforcement Algorithms
  • These algorithms were modelled on how the brains of humans or rats respond to punishments and rewards. They learn from outcomes and decide on the next action.
  • They are good for systems which have to make lots of small decisions without human guidance.

For example:-

  • A system which plays chess.
  • A temperature control system, when it has to decide whether the temperature should be increased or decreased.
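The textbook reinforcement learning method is Q-learning: the agent keeps a table of action values, tries actions, and updates the table from the rewards it receives. A minimal sketch on a made-up toy world (a corridor of four states with a reward at the far end; all hyperparameters are illustrative):

```python
import random

random.seed(0)
n_states = 4
actions = [1, -1]  # move right or move left
# Q-table: expected long-term reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):  # training episodes
    s = 0
    while s != 3:  # state 3 holds the reward and ends the episode
        if random.random() < epsilon:
            a = random.choice(actions)  # explore
        else:
            a = max(actions, key=lambda b: Q[(s, b)])  # exploit
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == 3 else 0.0
        best_next = max(Q[(s2, b)] for b in actions)
        # Q-learning update rule
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda b: Q[(s, b)]) for s in range(3)]
print(policy)  # [1, 1, 1]: the agent has learned to always move right
```

Nobody told the agent the answer; the many small decisions and their eventual rewards were enough, which is exactly the setting the bullet points above describe.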

And finally, the knowledge of machine learning frameworks

Frameworks such as scikit-learn, TensorFlow, and PyTorch implement the algorithms described above, so you can build and train models without coding everything from scratch.


Monika Khilrani

Hi, I am Monika Khilrani. Amazed by computer science, I am a technology enthusiast, author, and founder of www.mindofyouth.com. Writing is not only my profession, it is my passion too, because DATA = Knowledge + Experience.

