Familiar with popular machine learning frameworks such as TensorFlow, PyTorch, Caffe2, Spark MLlib, Microsoft CNTK and CNTK 2, Apache MXNet, scikit-learn, and Theano.
A. TensorFlow
- Google's portable machine learning framework.
- Performs and scales well.
- Uses data flow graphs for performing numerical computations.
- Has models and algorithms that are heavy on deep learning.
- Good with GPUs (for training) and Google TPUs (for prediction at production scale).
- Excellent support for Python.
- Good for visualization via TensorBoard.
- Flexible architecture: deploy on CPUs or GPUs in a desktop, server, or mobile device without rewriting code.
- A lot of tutorials.
- Applications: Google Photos, Google speech recognition, Google Search, Google Translate.
- Adoption: Snapchat, Dropbox, DeepMind, Twitter, Uber, Intel, etc.
- Supported platforms: Android, iOS, Linux, macOS, Windows.
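The "data flow graph" idea above is worth pinning down: in TensorFlow's classic (1.x) style you first build a graph of operations, and arithmetic happens only when the graph is run. A minimal pure-Python sketch of that build-then-run pattern (illustrative only, not TensorFlow's actual API):

```python
# Minimal data-flow-graph sketch: build first, run later,
# mimicking TensorFlow 1.x's graph-then-session style.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def constant(v):           # graph node holding a fixed value
    return Node("const", value=v)

def add(a, b):             # graph node: addition (not computed yet)
    return Node("add", (a, b))

def mul(a, b):             # graph node: multiplication (not computed yet)
    return Node("mul", (a, b))

def run(node):             # the "session": walk the graph and compute
    if node.op == "const":
        return node.value
    vals = [run(i) for i in node.inputs]
    return vals[0] + vals[1] if node.op == "add" else vals[0] * vals[1]

# Building the graph performs no arithmetic...
y = add(mul(constant(3), constant(4)), constant(5))
# ...the computation happens only when the graph is run.
print(run(y))  # 17
```

Deferring execution this way is what lets the framework optimize the graph and place pieces of it on CPUs, GPUs, or mobile devices without the model code changing.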
POINTS TO PONDER
1. TensorBoard is a visualization tool that shows you graphically how the code is being executed and how the model is training: whether it is over-fitting or generalizing well. You can watch the accuracy change as training goes on, so it is a really good visualization tool, and it also visualizes the computation graph as you are building it.
TensorFlow is used for a bunch of applications and works with lots of operating systems.
2. PyTorch has a visualization tool called Visdom, but one thing it does not have is a visualization of how the graph is working while you are executing. If you are developing a production-ready model, you may want to find out how the model is working through a visualization tool, so in that sense TensorFlow has a unique capability.
3. Suppose you have an image-processing model that runs on a CPU or GPU, and you are trying to make it run on a mobile device, which has much less computing power. There are many factors and techniques you can bring in; one of them is a concept called PRUNING.
4. PRUNING – In image processing, the kernel is doing a lot of the mathematical computation. The size of the kernel is an issue: it has a direct impact on the time it takes to compute. If you reduce the size of your image and also reduce the size of your kernel, the computation goes down, but so do efficiency and accuracy. Balancing the reduced accuracy against the reduced computation time is a big issue here.
YOLO is an image-processing model whose developers have built something extremely fast. Compared with earlier detectors such as Faster R-CNN, it sacrifices a little bit of accuracy for the sake of speed.
This matters especially if you are working on autonomous cars and want the machine to figure out what object is on the street and decide whether to go ahead or not. You want the model to predict, even with only 80%, 85%, or 90% accuracy, in a very, very short time: say 20 or 30 milliseconds, not 1 or 2 seconds. A car driving at 60 miles per hour covers so much distance in that time that a slow prediction is dangerous. Once you use a wrapper it is easy to program, but if you really want to customize the model to make it as efficient as possible, you would go one or two levels down; in that case TensorFlow could be used.
5. Caffe is a different framework; we will discuss Caffe2 especially, because it sits at the balance point of whether to use a wrapper or not. If you are not trying to understand how the model works or what machine learning is, and you just want to experiment with a few things yourself as a beginner or even an intermediate user, then Keras, Sonnet, or other wrappers would be really good.
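The pruning idea from point 4 can be sketched in a few lines: zero out the smallest-magnitude weights of a kernel so that fewer multiplications matter at run time. A pure-Python illustration (the kernel values and threshold here are made up):

```python
# Magnitude-pruning sketch: zero out kernel weights whose absolute
# value falls below a threshold, trading some accuracy for speed.

def prune(kernel, threshold):
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in kernel]

kernel = [[0.9, -0.05, 0.3],
          [0.02, 0.7, -0.4],
          [-0.01, 0.6, 0.08]]

pruned = prune(kernel, threshold=0.1)
# Small weights are now exactly zero and can be skipped when computing.
zeros = sum(w == 0.0 for row in pruned for w in row)
print(zeros)  # 4 of the 9 weights were pruned
```

The trade-off described above shows up directly: a higher threshold prunes more weights (faster inference) but distorts the kernel more (lower accuracy).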
B. Caffe 2
- Expressive architecture: a single flag to train on GPU or CPU.
- Speed: can process over 60M images per day with a single NVIDIA K40 GPU.
- Supported platforms: Linux, Windows, Mac.
- Deep learning tool written in C++.
- Model Zoo.
- Command line, Python, and MATLAB interfaces.
- Separates network architecture from implementation.
- Caffe stores, communicates, and manipulates information as blobs (binary large objects).
- Blobs are to Caffe as tensors are to TensorFlow.
- Not meant for non-computer-vision tasks such as sound, time series, or text.
POINTS TO PONDER
- Facebook actually put a lot of support into Caffe2, and by the way, PyTorch is also Facebook's contribution; that is another interesting factor. Google has thrown its weight behind TensorFlow, and they want you to use TensorFlow for all purposes and activities, whether it is audio processing, signal processing, text, or image processing, in a classroom environment or an industry environment.
- Whereas Facebook wants you to use PyTorch if you are developing models, more the research type of models, or developing really strong-performing models. But if you want to use a model for production, where you can quickly develop something that works efficiently, sometimes using pre-trained models, they want you to use Caffe2.
- Facebook has two frameworks: Caffe2 and PyTorch.
- Caffe2 is even more efficient in that flagging between GPU and CPU is just one single line of code.
- Caffe2 is actually very popular for image processing. I have seen 90% of the papers I have read in image processing using Caffe as the framework and using the Model Zoo; there is a concept called the Model Zoo, where lots of pre-trained image-processing models are available in Caffe, and it is very easy to use as well.
- And the second best thing about Caffe2 is that much of the time you do not actually write any code. Can you imagine that? What you write instead is a sort of JSON-like file called a prototxt.
What is a prototxt?
- It is a bunch of lines that do nothing but tell Caffe: go look for the data here, pick up the data in this sequence, use this model, and train it.
- The prototxt is almost like a text file. It could run as small as maybe just a hundred or two hundred lines, but at the other extreme it could run to 7,000 lines. For example, the 152-layer ResNet network proposed by Microsoft in 2015 was developed in Caffe and runs on Caffe; it had seven thousand lines of prototxt, which is far too much for a human being to write by hand. You should actually go on Google and find out how a prototxt file looks; you literally do not write much code there.
- So if you are doing image processing and want to use pre-existing/pre-trained models, Caffe2 would be a good area; just start working on it.
- Other than computer vision tasks, I have not seen too many people using Caffe2.
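For a concrete feel, here is a tiny hypothetical prototxt fragment in Caffe's declarative style; the network and layer names ("TinyNet", "conv1") are made-up examples. Note that it describes the architecture, with no imperative code:

```
name: "TinyNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 16 kernel_size: 3 }
}
```

Each `layer` block names its inputs (`bottom`) and outputs (`top`), which is how the whole network gets wired together without writing code.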
Some recommendations if you are developing a model for your own case:
a. Let's say you are a hospital or health-care company, or a retail company trying to take images of what people are putting in their baskets, analyze what is in the basket, and make recommendations; for developing a fundamentally new type of model, use PyTorch.
b. To run a model in production, or to take a pre-existing model from the Model Zoo, train it, and then use it for production purposes, run it on Caffe2.
c. If you want to play around with image-processing models, or even develop a new model, use TensorFlow or PyTorch, not Caffe2.
d. I personally recommend Caffe2 for companies that want to run a production-ready model.
e. Google is at the forefront of machine learning.
C. PyTorch
- A scientific computational framework.
- No separate "create graph, then run graph" step: graphs are built and run on the fly.
- Python Wrapper.
- Strong CUDA and CPU back-ends.
- Mature Machine learning and optimization packages.
- Bindings to the latest version of NVIDIA cuDNN.
- Multi- GPU support and parallelizing packages.
- Large community of developers.
- Actively used at Facebook, Google, and Twitter.
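The "run on the fly" point above is PyTorch's define-by-run style: each operation executes immediately as ordinary Python, and gradients are tracked as you go. A tiny pure-Python sketch of that idea (illustrative only, not PyTorch's actual API):

```python
# Define-by-run sketch: values compute eagerly and remember how to
# push gradients backward, the way PyTorch executes on the fly.

class Value:
    def __init__(self, data, grad_fn=None):
        self.data, self.grad = data, 0.0
        self.grad_fn = grad_fn            # how to propagate gradients back

    def __mul__(self, other):
        out = Value(self.data * other.data)
        def backward(g):                  # chain rule for multiplication
            self.grad += g * other.data
            other.grad += g * self.data
        out.grad_fn = backward
        return out

# Eager execution: plain Python control flow shapes the graph.
x = Value(3.0)
y = x * x                # computed immediately, no session needed
y.grad_fn(1.0)           # backpropagate d(y)/d(y) = 1
print(y.data, x.grad)    # 9.0 6.0  (d(x^2)/dx = 2x = 6 at x = 3)
```

Because the graph is just the trace of ordinary Python execution, debugging feels like debugging regular Python code, which is a big part of why researchers like PyTorch.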
POINTS TO PONDER
- PyTorch is very close to regular Python code.
- When you run TensorFlow you still have to import NumPy (Python's numerical extension module).
- It has been actively used by Facebook because Facebook is one of its sponsors.
- PyTorch works really well for research projects.
D. Spark MLlib (Apache Spark)
- Open-source ML library for Spark.
- ML algorithms for classification, regression, clustering, and collaborative filtering, along with tools for feature extraction, transformation, dimensionality reduction, and selection.
- Tools for constructing, evaluating, and tuning machine learning pipelines.
- Utilities for saving and loading algorithms, models, and pipelines, for data handling, and for doing linear algebra and statistics.
- Written in Scala.
- Uses the linear algebra package Breeze.
- Has full APIs for Scala, Java, and Python, and partial support for R.
- Easiest to work with from Jupyter notebooks.
- Not really set up to model and train DNNs the way TensorFlow, MXNet, Caffe, and CNTK are.
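MLlib's central abstraction is the pipeline: a chain of stages, each with a fit/transform step, feeding into the next. A minimal pure-Python sketch of that pattern (the stage names here are illustrative, not the actual pyspark API):

```python
# Pipeline sketch: chain feature transformers and an estimator,
# mirroring the fit/transform stages of Spark MLlib pipelines.

class Scaler:
    def fit(self, data):
        self.max = max(data) or 1.0        # guard against an all-zero column
        return self
    def transform(self, data):
        return [x / self.max for x in data]

class Centerer:                            # toy second stage
    def fit(self, data):
        self.mean = sum(data) / len(data)
        return self
    def transform(self, data):
        return [x - self.mean for x in data]

class Pipeline:
    def __init__(self, stages):
        self.stages = stages
    def fit_transform(self, data):
        for stage in self.stages:          # each stage fits, then feeds the next
            data = stage.fit(data).transform(data)
        return data

result = Pipeline([Scaler(), Centerer()]).fit_transform([2.0, 4.0, 8.0])
print(result)  # scaled, then centered values summing to ~0
```

The point of the pattern is that the whole chain can be fitted, evaluated, and tuned as one unit, which is what MLlib's pipeline tools provide on top of Spark's distributed data.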
E. MS CNTK and CNTK 2 (Microsoft CNTK)
- Fast and easy to use deep learning packages.
- Limited in scope compared to TensorFlow.
- Good variety of models and algorithms.
- Excellent support for Python and Jupyter notebooks.
- Automated deployment for Windows and Ubuntu Linux.
- Python examples and tutorials.
- FFN (feed-forward networks).
- RNN/LSTM (recurrent / long short-term memory networks).
- Batch normalization.
- Seq-to-seq with attention.
- Supports reinforcement learning, generative adversarial networks, supervised and unsupervised learning, and automatic hyperparameter tuning.
- Achieves parallelism using GPUs.
- Supports Python, C++, BrainScript, and C#.
F. Apache MXNet (DMLC MXNet)
- Amazon’s portable, scalable deep learning library.
- Supports Python, R, Scala, Julia, and C++.
- Dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly.
- API is a superset of what's offered in Torch, Theano, Chainer, and Caffe.
- Similar to TensorFlow.
- MXNet tutorials for computer vision cover image classification and segmentation using CNNs, object detection using Faster R-CNN, neural art, and large-scale image classification using a deep CNN and the ImageNet dataset.
- Additional tutorials for NLP, speech recognition, adversarial networks, and both supervised and unsupervised machine learning.
G. Scikit-learn
- Wide selection of robust ML algorithms.
- Good selection of algorithms for classification, regression, clustering, dimensionality reduction, model selection, and pre-processing.
- Good documentation and examples.
- A robust and well-proven ML library for Python.
- Uses Cython (the Python-to-C compiler).
- No deep learning or reinforcement learning; lacks graphical models and sequence prediction.
- Can't be used with languages other than Python.
- Lacks a guided workflow for choosing algorithms.
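One reason scikit-learn's "wide selection of algorithms" is so usable is that every estimator shares the same fit/predict interface. A pure-Python sketch of that pattern using a toy one-nearest-neighbor classifier (illustrative, not scikit-learn's own code):

```python
# fit/predict sketch in the scikit-learn estimator style:
# a one-nearest-neighbor classifier on 1-D inputs.

class OneNN:
    def fit(self, X, y):
        self.X, self.y = list(X), list(y)   # memorize the training data
        return self                          # fit returns self, sklearn-style

    def predict(self, X):
        # for each query, return the label of the closest training point
        return [self.y[min(range(len(self.X)),
                           key=lambda i: abs(self.X[i] - x))]
                for x in X]

clf = OneNN().fit([1.0, 2.0, 10.0], ["a", "a", "b"])
print(clf.predict([1.4, 9.0]))  # ['a', 'b']
```

Because every algorithm in the library follows this same convention, you can swap one model for another by changing a single line.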
- TensorFlow and Caffe2 are good for mobile devices.
- Users of Spark can use MLlib.
H. Theano
- Free Python symbolic manipulation library.
- Especially suited to gradient-based methods, such as deep learning, that require repeated computation.
- Compiled and executed on either CPU or GPU.
- Uses the CUDA library.
- Includes custom C and CUDA code generators tailored for different types, sizes, and shapes of inputs.
- Large user community.
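Theano's core idea is symbolic manipulation: you describe an expression once, and the library derives (and compiles) its gradient rather than approximating it numerically. A tiny pure-Python sketch of symbolic differentiation for polynomials (illustrative only; Theano's real API is quite different):

```python
# Symbolic-differentiation sketch: represent a polynomial by its
# coefficient list and derive the gradient expression symbolically.

def grad(coeffs):
    # d/dx of sum(c_k * x**k) is sum(k * c_k * x**(k-1))
    return [k * c for k, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

f = [5.0, 0.0, 3.0]           # represents 3x^2 + 5
df = grad(f)                  # symbolic derivative: 6x
print(df, evaluate(df, 2.0))  # [0.0, 6.0] 12.0
```

Deriving the gradient once as an expression, then compiling it for CPU or GPU, is what made Theano efficient for the repeated computations of gradient-based training.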
THEANO is now mainly of theoretical interest; for all practical purposes it is dead (active development stopped in 2017).
- Prefer TensorFlow for the majority of tasks (CPU and GPU).
- Club it with Keras or another high-level wrapper for ease of programming.
- PyTorch is programmed very efficiently and is currently used a lot in research.
- TensorFlow and Caffe2 are used in industry.
- Caffe2 is valued for its Model Zoo.