Top Courses to Help You Build a Career in Data Science
27 October 2022
The world is generating more data every day: consumers, sensors, and scientific experiments all emit a constant stream of data points. Working with data makes up a significant part of the job in finance, business, administration, and the natural and social sciences, and working efficiently with small or large datasets has become a valuable skill.
Data science is a field at the intersection of many disciplines, including data mining, machine learning, and statistics, to name a few.
Introducing Data Mining
Data mining provides a way for a computer to learn how to make decisions with data. This decision could be predicting tomorrow's weather, blocking a spam e-mail from entering your inbox, detecting the language of a website, or finding a new romance on a dating site. There are many different data mining applications, with new applications being discovered all the time.
We start the data mining process by creating a dataset that describes an aspect of the real world. Datasets comprise two aspects:
- Samples that are objects in the real world; for example, this can be a book, photograph, animal, person, or any other object.
- Features that are descriptions of the samples in our dataset. Features could be the length, frequency of a given word, number of legs, date it was created, and so on.
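The samples-and-features structure above can be sketched as a simple table, where each row is a sample and each column is a feature. A minimal illustration, using a hypothetical dataset of animals:

```python
import numpy as np

# Hypothetical dataset: three animals (samples), each described by
# two features - number of legs and weight in kilograms.
samples = ["dog", "spider", "ostrich"]
feature_names = ["n_legs", "weight_kg"]

X = np.array([
    [4, 30.0],   # dog
    [8, 0.02],   # spider
    [2, 100.0],  # ostrich
])

print(X.shape)  # (3, 2): 3 samples, 2 features
```

Most data mining libraries expect exactly this layout: one row per sample, one column per feature.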
Learn practical data mining techniques from FutureLearn and the famed University of Waikato.
Applications and Use Cases
Authorship Analysis
Authorship analysis verifies whether certain documents were indeed written by their supposed authors. For example, we can analyze Shakespeare's plays to determine his writing style before testing whether a given sonnet really originated with him.
A more modern use case is linking social network accounts. A malicious online user could set up multiple social network accounts; being able to link them allows authorities to track down the user behind a given account.
Authorship attribution is a classification task by which we have a set of candidate authors, a set of documents from each of those authors (the training set), and a set of documents of unknown authorship (the test set).
In authorship attribution, we typically have two restrictions on the tasks.
- First, we only use content information from the documents, not metadata such as the time of writing, delivery method, or handwriting style
- Second, we do not look at the topic of the documents; instead, we look at more salient features such as word usage, punctuation, and other text-based features
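The classification setup described above can be sketched in a few lines. This is a deliberately minimal illustration, not a production attribution method: the corpus, the function-word list, and the nearest-profile matching are all hypothetical choices, but they respect both restrictions by using only content-based, topic-neutral features.

```python
from collections import Counter

# Topic-neutral "salient" features: relative frequencies of common
# function words, which reflect style rather than subject matter.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def attribute(unknown_doc, training_set):
    """Assign the unknown document to the candidate author whose
    averaged profile is closest in Euclidean distance."""
    target = profile(unknown_doc)
    best_author, best_dist = None, float("inf")
    for author, docs in training_set.items():
        profiles = [profile(d) for d in docs]
        avg = [sum(col) / len(profiles) for col in zip(*profiles)]
        dist = sum((a - b) ** 2 for a, b in zip(avg, target)) ** 0.5
        if dist < best_dist:
            best_author, best_dist = author, dist
    return best_author
```

For example, an author whose training documents lean heavily on "the" will attract unknown documents with the same habit, regardless of what those documents are about.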
Principal component analysis (PCA) is an unsupervised linear transformation technique widely used across different fields, most prominently for dimensionality reduction. PCA helps us identify patterns in data based on the correlation between features. Other popular applications of PCA include exploratory data analysis, de-noising of signals in stock market trading, and the analysis of genome data and gene expression levels in bioinformatics.
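The core of PCA can be sketched directly from the covariance matrix: center the data, find the directions of largest variance, and project onto them. A minimal NumPy sketch on a hypothetical two-feature dataset:

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA: project X onto its top principal components."""
    # Center each feature at zero mean.
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the feature covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    # eigh returns eigenvalues in ascending order; take the largest.
    order = np.argsort(eigvals)[::-1][:n_components]
    # Project the centered data onto the chosen components.
    return Xc @ eigvecs[:, order]

rng = np.random.default_rng(0)
# Two strongly correlated features: the second is a noisy copy
# of the first, so one component captures almost all the variance.
a = rng.normal(size=100)
X = np.column_stack([a, a + 0.1 * rng.normal(size=100)])

Z = pca(X, 1)
print(Z.shape)  # (100, 1)
```

In practice one would typically use a tested library implementation (e.g. scikit-learn's `PCA`), which adds numerically stable SVD-based computation and conveniences such as explained-variance ratios.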
Learn more about practical data science with Python on the Careervira website, and click here to explore more data science courses.