Packages

scikit-learn

https://scikit-learn.org/stable/

Built using the popular NumPy, SciPy and matplotlib libraries, scikit-learn is a Python AI library that makes implementing machine learning algorithms easier. It has built-in algorithms to classify objects, build regressions, group similar objects into sets (clustering), reduce the number of random variables (dimensionality reduction), preprocess data and even compare and choose your final model for you. Its extensive library of functionality ensures you don’t have to reinvent the wheel when you want to add machine learning to your project.
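
To give a feel for the API, here is a minimal sketch that trains and scores a classifier on scikit-learn’s bundled iris dataset:

    # Train and evaluate a classifier on the bundled iris dataset.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)         # train the model
    print(clf.score(X_test, y_test))  # mean accuracy on held-out data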

TensorFlow

https://tensorflow.org

TensorFlow is a framework created by Google to make highly complex machine and deep learning algorithms accessible to the masses. Developers build models as data-flow graphs, where each node represents a computation and the edges between nodes carry the data (tensors). They can then pick from the many machine/deep learning algorithms already implemented in the TensorFlow library. Models are defined in Python but executed by a C++ backend, which is much faster than pure Python, so TensorFlow saves both development time and run time.
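
As a rough sketch (assuming TensorFlow 2.x, which bundles the Keras API), defining and compiling a small classifier looks like this:

    import tensorflow as tf

    # A small feed-forward classifier built with the Keras API.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5) would then train it on your data.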

XGBoost

https://xgboost.readthedocs.io/

XGBoost stands for extreme gradient boosting. This Python AI library focuses on helping developers classify data and build regressions using boosted decision-tree algorithms. The model is an ensemble of weak learners (shallow decision trees): as training proceeds, each new tree is added to correct the errors of the trees before it, “filling in the gaps” until no further improvement can be made. XGBoost is also engineered for scalability and performance, making it perfect for keeping up with your program’s growth.
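
A minimal sketch of its scikit-learn-compatible interface, borrowing a scikit-learn dataset for illustration:

    import xgboost as xgb
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Each of the n_estimators trees is a weak learner added to correct
    # the errors of the ensemble built so far.
    model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))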
Matplotlib

https://matplotlib.org/

Matplotlib allows developers to visualize datasets with a variety of different charts. It’s best used to discover and present insights from processed data. Whether through a static, dynamic or interactive graph, matplotlib brings your data to life and is important for communicating with non-technical audiences.
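
A minimal sketch of a static chart:

    import matplotlib.pyplot as plt

    # A simple static line chart; matplotlib also supports animated
    # and interactive figures.
    xs = [1, 2, 3, 4, 5]
    ys = [1, 4, 9, 16, 25]
    plt.plot(xs, ys, marker="o", label="y = x^2")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.show()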

Natural Language Processing

Natural language processing brings all aspects of linguistics to a computer program. Its ultimate goal is to understand and communicate with humans and other machines. And that’s not as simple as it sounds: there are a lot of rules, and a lot of hidden information, that need to be taught to machines explicitly. To bridge the gap between humans and machines, NLP uses syntactic and semantic analysis to form sentences correctly and to extract meaning from them.
Anything you communicate with using human language that isn’t a human likely uses some level of NLP. Examples include chatbots, writing-analysis tools (such as Grammarly) and the smart assistants in your phones and homes.

NLTK

http://www.nltk.org

NLTK stands for Natural Language Toolkit. It’s a Python AI library that makes common linguistic tasks simple through a wide range of well-defined functions and interfaces. From tokenizing and tagging text to identifying named entities and even displaying parse trees, NLTK is a general-purpose NLP library (or ‘toolkit’) that belongs in any language-based project.
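
A short sketch of tokenizing and tagging; note that the exact names of the downloadable resources vary between NLTK versions:

    import nltk

    # The tokenizer and tagger models ship separately and are fetched once.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    tokens = nltk.word_tokenize("NLTK makes common linguistic tasks simple.")
    print(nltk.pos_tag(tokens))  # [('NLTK', 'NNP'), ('makes', 'VBZ'), ...]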

spaCy

https://spacy.io/

spaCy has been described by its developers as “the Ruby on Rails of Natural Language Processing.” Indeed, through its remarkably simple API, spaCy makes processing large volumes of text fast and efficient. By integrating a tokenizer, tagger, parser, pre-trained word vectors and named entity recognition into one library, spaCy can help your program understand all aspects of a text, or simply pre-process it for one of the other AI libraries to deal with later.
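
A minimal sketch, assuming the small English model has been installed first (python -m spacy download en_core_web_sm):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

    for token in doc[:4]:
        print(token.text, token.pos_)   # tokenizing and tagging
    for ent in doc.ents:
        print(ent.text, ent.label_)     # named entity recognition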

Gensim

https://radimrehurek.com/gensim/

Gensim aims to make identifying the underlying topics of a piece of text (known as topic modeling) substantially easier. It handles the entire modeling process, from processing the text into a dictionary of tokens to building the topic model itself, all without having to load the entire corpus into memory.
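
A toy sketch of that pipeline on a tiny pre-tokenized corpus:

    from gensim import corpora, models

    # Toy corpus: each document is already tokenized.
    texts = [["human", "machine", "interface"],
             ["graph", "trees", "minors"],
             ["graph", "minors", "survey"]]

    dictionary = corpora.Dictionary(texts)           # token -> id mapping
    corpus = [dictionary.doc2bow(t) for t in texts]  # bag-of-words vectors
    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary)
    print(lda.print_topics())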

Neural Networks

Neural networks give programs something like a brain. They use systems of nodes (loosely modeled after the neurons in human brains), with each node representing a particular variable or computation. A task (mapping an input to a particular output) starts at the input “neurons” (or layer) and reaches the processed result (the output) through any number of pathways of computations. During training, the pathways that lead to a more accurate mapping to the desired output are strengthened and kept, while failing or inaccurate pathways are weakened and discarded. This loosely mirrors how our own brains learn, and it makes for an incredibly effective form of learning.
Neural networks let us implement facial recognition algorithms, predict the weather more effectively or even simulate marketing campaigns for businesses.
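
As a toy illustration, independent of any particular library, here is a single “neuron” whose one weight is strengthened or weakened by gradient descent until it learns the mapping y = 2x:

    # A single artificial neuron learning y = 2x by gradient descent:
    # adjustments that reduce the error strengthen the weight.
    w = 0.0
    for _ in range(50):
        for x, y in [(1, 2), (2, 4), (3, 6)]:
            pred = w * x                 # forward pass
            grad = 2 * (pred - y) * x    # gradient of the squared error
            w -= 0.01 * grad             # nudge the weight downhill
    print(w)  # converges toward 2.0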

FANN

https://github.com/libfann/fann

Fast Artificial Neural Network Library, or FANN, implements artificial neural networks in C (which is what makes it up to 150 times faster than some other libraries) while making them accessible from a number of different languages, including Python. It’s incredibly easy to use, allowing for the creation, training and running of an artificial neural network in just three function calls. With its thorough documentation, comprehensive training framework and parameter versatility, it’s a must-have for a project that employs neural networks.
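
As a rough sketch, assuming the fann2 Python bindings (which mirror the C API) and a training file in FANN’s plain-text format, those three calls look something like this:

    from fann2 import libfann

    # Create a 2-input, 3-hidden, 1-output network, train it from a
    # data file in FANN's text format, and run it on a new input.
    ann = libfann.neural_net()
    ann.create_standard_array([2, 3, 1])
    ann.train_on_file("xor.data", 1000, 100, 0.001)  # epochs, report interval, target error
    print(ann.run([1.0, 0.0]))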

ffnet

https://github.com/mrkwjc/ffnet

ffnet is a Python AI library for implementing feed-forward neural networks. It provides a graphical user interface for visualizing training datasets, and its automatic data normalization saves a lot of time in the pre-processing stage of your workflow. Because ffnet implements its core functions in Fortran, it is much faster than native-Python solutions.
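
A minimal sketch, following ffnet’s documented XOR example:

    from ffnet import ffnet, mlgraph

    # A 2-input, 2-hidden, 1-output feed-forward network.
    conec = mlgraph((2, 2, 1))
    net = ffnet(conec)

    inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
    targets = [[0], [1], [1], [0]]
    net.train_tnc(inputs, targets, maxfun=2000)  # inputs are normalized automatically
    print(net(inputs))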

PyTorch

https://pytorch.org/

PyTorch is built for tensor computation (with GPU acceleration) and for building deep neural networks on a tape-based autograd system. The latter means PyTorch’s computation graphs are dynamic: a network doesn’t have to be recreated every time the use case changes, which improves both speed and flexibility. Its main use cases are replacing NumPy to harness the power of GPUs (versus CPUs), and serving as a deep learning research platform that is highly customizable and fast.
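
A minimal sketch of tensors and the autograd tape:

    import torch

    # Gradients are recorded on a "tape" as operations run, so the
    # graph can change from one iteration to the next.
    x = torch.randn(3, requires_grad=True)
    y = (x ** 2).sum()
    y.backward()      # walk the tape backwards
    print(x.grad)     # dy/dx = 2x

    # Move work to the GPU when one is available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    z = torch.ones(3, device=device) * 2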

Computer Vision

As the name suggests, computer vision allows machines to both see and understand what they’re seeing. Through videos and images, machines can figure out what objects they’re looking at and classify them into appropriate categories. That smart CAPTCHA software you see barring you entry to your favorite website is actually getting you to train its computer vision algorithm for it. From facial recognition software to autonomous driving systems, computer vision algorithms are everywhere and only growing in complexity.

OpenCV

https://opencv.org/

Open Source Computer Vision Library (OpenCV) provides developers with over 2,500 optimized algorithms for a variety of computer vision use cases. From detecting/recognizing faces to classifying human actions, OpenCV makes understanding visual information a simple matter of calling the right function and specifying the right details. Paired with its robust community and extensive documentation, OpenCV is perfect for adding computer vision infrastructure to a project.
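
As a sketch, here is face detection with one of the Haar cascade models that ships with the opencv-python package (photo.jpg is just a placeholder filename):

    import cv2

    # Face detection with a bundled Haar cascade model.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")                  # any local image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:                     # draw a box per face
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces.jpg", img)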

SimpleCV

http://simplecv.org/

Where OpenCV focuses on comprehensiveness and customizability, SimpleCV focuses on making computer vision easy. The learning curve is much gentler, to the point where grabbing an image from a camera is as simple as initializing a camera (using Camera()) and asking it for an image (using Camera.getImage()). This Python AI library is a stellar choice for developers focused on common computer vision applications as opposed to highly customized solutions.
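
Those calls really are nearly the whole program (keep in mind SimpleCV is a Python 2-era library):

    from SimpleCV import Camera

    cam = Camera()         # initialize the default camera
    img = cam.getImage()   # grab a frame
    img.show()             # display it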

Expert Systems

Want the advice of an expert? You can ask one (for a price) or you can get a machine to think like an expert for you. As their name suggests, expert systems aim to model an expert’s reasoning process using a knowledge base (heaped with relevant data), an inference engine (to reason based on the data), and an explanation interface (to communicate its reasoning to the user).
This lets machines reason the way doctors and engineers do: PXDES, for example, diagnoses the type and degree of lung cancer in patients, while R1/XCON configured computer systems based on customer requirements.

PyCLIPS

http://pyclips.sourceforge.net/web/

PyCLIPS embeds the CLIPS inference engine in Python applications. The rules-based engine ships as a binary module inside the library and is accessed through classes and functions. The engine itself stays “alive” in a memory space separate from the Python space, so inferences and rules persist as your program grows in functionality.
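
A rough sketch of asserting a fact, building a rule and running the engine; note that PyCLIPS is an old, Python 2-era project, and the module-level calls below are drawn from its documentation rather than a current install:

    import clips  # the PyCLIPS module

    # Facts and rules live in the CLIPS engine's own memory space.
    clips.Assert("(animal duck)")
    clips.BuildRule("quack-rule",
                    "(animal duck)",           # left-hand side: the condition
                    "(assert (sound quack))")  # right-hand side: the action
    clips.Run()          # fire any rules whose conditions hold
    clips.PrintFacts()   # the engine now also holds (sound quack)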

Experta

https://pypi.org/project/experta/

Also inspired by CLIPS, Experta is a rule engine that matches a set of facts against a set of rules about those facts, then executes the actions those rules specify. All facts and rules are held by a knowledge engine, which determines the expert output of the system when it is run.
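
A minimal sketch of a knowledge engine with a single rule:

    from experta import Fact, KnowledgeEngine, Rule

    class Light(Fact):
        """A fact describing a traffic light."""

    class CrossingEngine(KnowledgeEngine):
        @Rule(Light(color="green"))       # fires when a matching fact exists
        def green(self):
            print("Walk")

    engine = CrossingEngine()
    engine.reset()                        # prepare the knowledge engine
    engine.declare(Light(color="green"))  # add a fact
    engine.run()                          # execute matching rules -> "Walk"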

Robotics and Autonomous Vehicles

Most artificial intelligence is applied to software, not hardware. With robotics, AI meets the physical world through a plethora of sensors modeling the human senses. From cameras that model sight to tactile sensors that model touch, AI algorithms must interpret the data from these sensors to allow a machine to function safely and effectively.

AirSim

https://github.com/Microsoft/AirSim

AirSim is a Unity/Unreal Engine based simulator built by Microsoft. While it’s not a Python AI library itself, AirSim allows developers to test and experiment with autonomous vehicle algorithms without actually needing to possess the physical hardware for it. It uses APIs to hook into your code so it remains language independent. In this way, it provides a sandbox for you to play around with autonomous vehicles without the costs and safety issues you’d need to overcome in the real, physical world.
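
A short sketch using AirSim’s Python client; it assumes a simulator instance is already running:

    import airsim

    # Connect to a running AirSim simulation and fly a short hop.
    client = airsim.MultirotorClient()
    client.confirmConnection()
    client.enableApiControl(True)
    client.armDisarm(True)

    client.takeoffAsync().join()
    client.moveToPositionAsync(10, 0, -5, 5).join()  # x, y, z (NED), velocity
    client.landAsync().join()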

Carla

https://github.com/carla-simulator/carla

Where AirSim can cater to a wide variety of autonomous vehicles (such as cars and drones), Carla caters specifically to autonomous driving research. It offers more driving-specific features, such as flexible vehicle sensors and configurable environmental conditions, as well as a wide variety of buildings and vehicles already implemented.
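
A minimal sketch using Carla’s Python client, assuming a CARLA server is listening on its default port:

    import carla

    # Connect to a running CARLA server and spawn a self-driving vehicle.
    client = carla.Client("localhost", 2000)
    world = client.get_world()

    blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprint, spawn_point)
    vehicle.set_autopilot(True)  # hand control to the built-in autopilot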

Bullet 

https://github.com/bulletphysics/bullet3

Bullet provides a physics sandbox where robotics and virtual reality applications can be created and tested. Capital-intensive development tasks like collision detection, locomotion skills and reinforcement learning (which requires a lot of failure) can be carried out in a harmless simulated scenario instead of on expensive hardware.
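
Bullet’s Python binding is PyBullet. A minimal headless sketch that drops one of the bundled sample robots onto a plane:

    import pybullet as p
    import pybullet_data

    # Run a headless simulation: drop a robot onto a plane.
    p.connect(p.DIRECT)  # use p.GUI for a visual sandbox
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.8)

    p.loadURDF("plane.urdf")
    robot = p.loadURDF("r2d2.urdf", [0, 0, 1])
    for _ in range(240):  # step the physics at 240 Hz
        p.stepSimulation()
    print(p.getBasePositionAndOrientation(robot))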
