Outliers: The R-squared Rebels in Data Analysis

Table of Contents

1. Embracing the Anomalies

2. Defining Outliers in the Realm of Statistics

3. The Impact of Outliers on R-squared and Model Accuracy

4. Methods and Madness

5. When Data Tells a Different Story

6. How Outliers Challenge Predictive Models

7. Outliers That Changed the Game

8. Techniques for Cleaner Data

9. The Paradox of Outliers in Data Analysis

1. Embracing the Anomalies

In the realm of data analysis, anomalies are often viewed with a mix of intrigue and caution. They are the statistical deviants that refuse to conform to the well-trodden path of the majority. Yet, it is precisely these outliers that often hold the key to deeper insights and breakthroughs in understanding complex phenomena. Rather than dismissing these anomalies as mere errors or noise, embracing them can lead to a richer, more nuanced narrative of the data at hand.

From the perspective of a statistician, outliers can signal an error in data collection or an indication of a significant, yet rare event. For a data scientist, they might represent a chance to delve into machine learning's robustness, testing algorithms against edge cases. Meanwhile, a business analyst might see outliers as opportunities or threats, depending on whether they indicate untapped markets or potential financial pitfalls.

1. Statistical Significance: Outliers can dramatically affect the mean and standard deviation of a dataset, leading to misleading conclusions. For instance, the average income level in a neighborhood might appear affluent due to a single billionaire resident, overshadowing the economic reality of the majority (a small numeric sketch of this effect follows the list).

2. R-squared Values: In regression analysis, an outlier can disproportionately influence the R-squared value, which measures the proportion of variance in the dependent variable that's predictable from the independent variable(s). A single data point far from the trend line can reduce the R-squared value, suggesting a poor fit that might not be representative of the overall data relationship.

3. Leveraging Anomalies: Consider the case of quality control in manufacturing. An outlier in the form of a defective product could lead to an investigation that uncovers a systemic issue in the production line, ultimately improving the process and product quality.

4. Predictive Modeling: In finance, an outlier might be a sudden market crash or boom. Predictive models that can account for such anomalies could provide a hedge against risk or capitalize on unexpected opportunities.
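
To make the first point concrete, here is a minimal, hypothetical sketch (all income figures are invented) showing how a single billionaire shifts the mean while leaving the median untouched:

```python
import numpy as np

# 99 ordinary household incomes plus one billionaire (all figures invented)
incomes = np.concatenate([np.full(99, 60_000.0), [1_000_000_000.0]])

print(f"Mean income:   ${incomes.mean():,.0f}")      # roughly $10 million, wildly unrepresentative
print(f"Median income: ${np.median(incomes):,.0f}")  # $60,000, matches the typical household
```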

By integrating these diverse perspectives, we can construct a more holistic approach to data analysis. Anomalies shouldn't be feared or avoided; they should be understood and harnessed. They challenge our assumptions and push the boundaries of our knowledge, acting as the catalysts for innovation and progress in the ever-evolving narrative of data.


2. Defining Outliers in the Realm of Statistics

Outliers are akin to the mavericks of the data world, deviating markedly from the rest of the data points and often prompting a double-take. They are the statistical anomalies that, at first glance, seem to disrupt the harmony of data sets. Yet, their presence is not merely for eccentricity; outliers carry with them a wealth of information that, when interpreted correctly, can unveil patterns, errors, or novel discoveries that would otherwise remain obscured. In the realm of statistics, defining an outlier is not always straightforward. It requires a nuanced approach that considers the context, the nature of the data, and the intended analysis.

1. Statistical Definition: Traditionally, an outlier is defined as a data point that lies more than 1.5 times the interquartile range (IQR) above the third quartile or below the first quartile. For a data set $$ x_1, x_2, ..., x_n $$ with first quartile $$ Q1 $$ and third quartile $$ Q3 $$, a data point $$ x_i $$ is considered an outlier if $$ x_i < Q1 - 1.5 \times IQR $$ or $$ x_i > Q3 + 1.5 \times IQR $$ (a short code sketch of this rule follows the examples below).

2. Contextual Definition: Sometimes, what constitutes an outlier may depend on the context. For instance, in a class of students, a score of 100 on a test might be an outlier if most scores are around 70. However, if the test is exceptionally easy and most scores are high, a 100 might not be an outlier.

3. Influential Points: In regression analysis, an outlier might not just be a distant point but also an influential one. An influential point is one that, if removed, would significantly change the result of the regression. This is often measured by the leverage or Cook's distance.

4. Multivariate Outliers: When dealing with multivariate data, outliers can be more complex. They may not stand out in any single dimension but are unusual in the multidimensional space. Detection methods like Mahalanobis distance are used to identify these outliers.

5. Domain-Specific Outliers: Different fields may have different criteria for outliers. For example, in finance, a transaction that is significantly higher than usual might be flagged as an outlier for fraud detection.

Examples:

- In a data set of house prices, a mansion priced at $50 million might be an outlier if most homes are under $1 million.

- In a clinical trial, if most patients show a moderate reaction to a drug but one shows a severe adverse effect, that patient's data might be considered an outlier.
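
As referenced in the statistical definition above, here is a minimal sketch of the 1.5 × IQR rule, assuming pandas is available; the house prices (including the $50 million mansion from the example) are invented:

```python
import pandas as pd

prices = pd.Series([250_000, 310_000, 295_000, 420_000, 380_000,
                    275_000, 330_000, 50_000_000])  # one $50M mansion

q1, q3 = prices.quantile(0.25), prices.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = prices[(prices < lower) | (prices > upper)]
print(outliers)  # only the $50,000,000 mansion falls outside the fences
```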

Understanding outliers is crucial because they can affect the results of statistical analyses. If outliers are the result of measurement error or data entry error, they should be corrected or removed. However, if they are accurate representations of the data, they can provide valuable insights and should be carefully studied. The key is to approach outliers with a blend of statistical techniques and domain knowledge to discern their true nature and impact on the analysis.


3. The Impact of Outliers on R-squared and Model Accuracy

Outliers in data can be likened to rebels in a harmonious society; they disrupt the norm and challenge the status quo. In the realm of data analysis, these outliers are not just numbers that fall far from the cluster of other data points; they are pivotal players that can dramatically influence the outcome of statistical models, particularly the R-squared value and overall model accuracy. R-squared, a statistical measure of how close the data are to the fitted regression line, is often used as a gauge of model accuracy. However, its sensitivity to outliers can lead to misleading interpretations.

From a statistical perspective, outliers can inflate or deflate R-squared values, depending on their position relative to the regression line. If an outlier lies close to the line, it can artificially increase the R-squared value, giving a false sense of a well-fitting model. Conversely, an outlier far from the line can decrease R-squared, suggesting a poor fit when the model might actually be appropriate for the majority of the data.

Here's an in-depth look at how outliers impact R-squared and model accuracy:

1. Inflation of R-squared: An outlier that falls near the regression line can inflate the R-squared value. This is because R-squared reflects the proportion of variance explained by the model. An outlier close to the line increases the variance explained, leading to a higher R-squared.

2. Deflation of R-squared: Conversely, an outlier far from the regression line can decrease the R-squared value. This outlier introduces more unexplained variance, which lowers the proportion of variance that the model accounts for.

3. Leverage Points: Outliers with high leverage can have a disproportionate impact on the regression line itself. These are data points that do not necessarily have a large residual but are at an extreme value of the predictor variable. They can pull the regression line towards themselves, affecting the slope and intercept.

4. Cook's Distance: This is a measure that combines the leverage of a data point with the size of its residual. Points with a high Cook's distance are considered to be influential outliers because they have a significant impact on the regression line's parameters.

5. Masking and Swamping: Outliers can also cause other data points to become outliers (masking) or make actual outliers appear to be part of the normal data (swamping). This can complicate the detection and treatment of outliers.

6. Robust Regression Techniques: To mitigate the impact of outliers, robust regression techniques such as RANSAC, the Theil-Sen estimator, or Huber regression can be used. These methods are less sensitive to outliers and provide a more accurate representation of the central tendency of the data.

For example, consider a dataset where the relationship between hours studied and exam scores is being analyzed. If a student who studied an unusually low number of hours but scored exceptionally high is included in the dataset, this point would be an outlier. In a standard least squares regression, this single data point could pull the regression line toward it, flattening the slope and understating the effect of study hours on exam scores. This is where robust regression techniques are beneficial, as they reduce the influence of this outlier on the model.
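
A hedged sketch of that scenario, assuming scikit-learn is available (the study-hours data are invented), shows how one outlier deflates R-squared and drags the ordinary least squares slope, while a robust Huber fit stays close to the underlying trend:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(0)
hours = np.arange(1, 21, dtype=float)                      # hours studied
scores = 50 + 2.0 * hours + rng.normal(0, 2, hours.size)   # true slope: 2 points per hour

# Inject the outlier: 2 hours of study, a score of 98
hours_out = np.append(hours, 2.0)
scores_out = np.append(scores, 98.0)

X, X_out = hours.reshape(-1, 1), hours_out.reshape(-1, 1)
ols_clean = LinearRegression().fit(X, scores)
ols_dirty = LinearRegression().fit(X_out, scores_out)
huber = HuberRegressor().fit(X_out, scores_out)

print(f"R^2 without the outlier: {ols_clean.score(X, scores):.3f}")
print(f"R^2 with the outlier:    {ols_dirty.score(X_out, scores_out):.3f}")
print(f"OLS slope with outlier:   {ols_dirty.coef_[0]:.2f}")   # pulled well below 2.0
print(f"Huber slope with outlier: {huber.coef_[0]:.2f}")       # stays close to the true 2.0
```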

While outliers can provide valuable insights and highlight anomalies, their presence in a dataset must be carefully managed to ensure the accuracy and reliability of statistical models. Understanding the impact of outliers on R-squared and model accuracy is crucial for any data analyst seeking to draw meaningful conclusions from their data.


4. Methods and Madness

Outliers are like the renegades of the data world, often dismissed as mere anomalies, yet they hold the power to either reveal a hidden truth or lead us astray. The quest to detect these statistical mavericks is a blend of art and science, where traditional methods meet innovative techniques. It's a journey through a labyrinth of numbers where each turn could be a step towards insight or a descent into data madness. From the perspective of a data scientist, outliers are not just noise; they're signals that beg for interpretation. For the statistician, they're potential sources of bias that could skew results. And for the business analyst, they could be the key to understanding unusual but significant market trends.

Here's an in-depth look at the methods used to detect outliers:

1. Standard Deviation Method: This approach considers data points that fall outside of a certain number of standard deviations (usually 2 or 3) from the mean as outliers. For example, in a dataset of test scores, if the average is 70 with a standard deviation of 10, scores below 50 or above 90 might be considered outliers.

2. Interquartile Range (IQR) Method: The IQR is the range between the first quartile (25th percentile) and the third quartile (75th percentile). Data points that lie more than 1.5 times the IQR above the third quartile or below the first quartile are outliers. For instance, if the IQR of house prices in a neighborhood is $100,000, houses priced more than $150,000 above the third quartile or more than $150,000 below the first quartile might be outliers.

3. Z-Score Method: A Z-score represents how many standard deviations a data point is from the mean. A high absolute Z-score indicates that the data point is a potential outlier. A common cutoff is a Z-score of 3 or -3.

4. Box Plot Method: Box plots visually display the distribution of data and highlight the outliers as points outside the 'whiskers' of the plot. This method is particularly useful for identifying outliers in large datasets.

5. DBSCAN Clustering: Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a machine learning algorithm that groups points that are closely packed and marks points that lie alone in low-density regions as outliers.

6. Isolation Forest: An ensemble method that isolates anomalies by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature.

7. Visual Inspection: Sometimes, the best way to detect outliers is through a simple visual inspection of the data. Scatter plots can help identify points that fall far from the cluster of data.

8. Grubbs' Test: This statistical test is used to detect a single outlier in a univariate dataset that follows an approximately normal distribution.

9. Mahalanobis Distance: This metric measures the distance between a point and a distribution. It's useful for detecting outliers in a multivariate dataset.

Each method has its strengths and weaknesses, and often, a combination of methods is used to ensure robust outlier detection. For example, while the standard deviation method is simple and widely understood, it assumes a normal distribution and can miss outliers in a skewed dataset. On the other hand, the IQR method is more robust to skewed distributions but can be less sensitive to outliers in a symmetric distribution.
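
To illustrate how two of the simpler methods above can be combined, here is a rough sketch; the data are synthetic and the cutoffs are the conventional ones, not universal rules:

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    z = (x - x.mean()) / x.std()
    return np.abs(z) > threshold

def iqr_outliers(x, k=1.5):
    """Flag points beyond k * IQR below Q1 or above Q3."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

rng = np.random.default_rng(42)
data = np.append(rng.normal(loc=60, scale=5, size=200), 200.0)  # one injected anomaly

flagged = zscore_outliers(data) | iqr_outliers(data)
print(data[flagged])  # the injected 200 is caught; the IQR rule may also flag a genuine tail value
```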

In practice, outlier detection is as much about understanding the context of the data as it is about crunching numbers. A high credit card transaction might be an outlier in a dataset of typical transactions, but it could also be a legitimate purchase during a holiday sale. Therefore, it's crucial to interpret outliers not just statistically, but also within the narrative of the data.

Detecting outliers is a critical step in data analysis, as it can significantly impact the results of statistical models and the decisions based on them. By carefully examining outliers and considering their potential causes and implications, analysts can ensure that their conclusions are accurate and reliable. Whether they're a sign of data entry errors, a need for data transformation, or an indication of a novel phenomenon, outliers challenge us to look closer and think deeper about our data.

5. When Data Tells a Different Story

In the realm of data analysis, outliers are not just anomalies; they are the harbingers of a deeper, often untold story. These statistical mavericks challenge the norm, question the standard, and push analysts to look beyond the obvious. They are the data points that lie outside the general clustering, the ones that do not follow the predicted pattern set by the majority. The presence of outliers can significantly affect the results of our data analysis and predictive modeling. They can skew our results, leading to misleading conclusions if not addressed properly.

From a statistical perspective, an outlier's impact is profound. In regression analysis, for example, outliers can drastically alter the R-squared value, which measures the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. When outliers are present, the R-squared value may either overestimate or underestimate the strength of the relationship between the variables.

Here are some in-depth insights into the outlier effect:

1. The Leverage Effect: Outliers can have a 'leveraging' effect in regression analysis, disproportionately influencing the slope of the regression line. This can lead to a model that fits the outlier well but performs poorly for the rest of the data.

2. The Masking Effect: Sometimes, one outlier can hide the presence of another. This phenomenon, known as 'masking', complicates the detection process, requiring analysts to perform iterative checks.

3. The Magnification Effect: In time-series data, a single outlier can cause a ripple effect, magnifying its impact over subsequent data points and leading to a compounding error in predictive models.

4. The Distortion Effect: Outliers can distort the perceived distribution of data. For instance, a dataset with a normal distribution might appear skewed if outliers are not accounted for, affecting all related statistical inferences.

5. The Innovation Effect: On the flip side, outliers can sometimes indicate innovation or a new trend. They can be the first sign of a shift in the data generating process, such as a new consumer behavior pattern emerging in market research data.

To illustrate these effects, let's consider the example of a retail company analyzing customer spending data. Suppose the dataset includes transactions from a recent promotional campaign. Most customers spent around $50, but there were a few transactions where customers spent over $1000 due to a system error that allowed multiple coupon redemptions. These transactions are outliers and could lead to incorrect conclusions about the average spending per customer if not addressed.
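
A quick sketch of that retail scenario (the transaction amounts are invented) shows how just three erroneous transactions inflate the estimated average spend, and how a simple filter restores a representative figure:

```python
import numpy as np

rng = np.random.default_rng(1)
spend = np.append(50.0 + rng.normal(0, 8, size=100),   # ~100 normal transactions near $50
                  [1200.0, 1350.0, 1100.0])            # coupon-error transactions

print(f"Mean spend, all transactions:        ${spend.mean():.2f}")            # inflated well above $50
print(f"Mean spend, errors filtered (<$500): ${spend[spend < 500].mean():.2f}")
```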

Outliers are not just statistical nuisances; they are critical indicators that require careful consideration. Whether they represent a data entry error, a rare event, or the dawn of a new trend, their proper handling can unveil the true narrative hidden within the data. By acknowledging and investigating these outliers, analysts can ensure that their models are robust and their insights are accurate, painting a complete picture of the data story.


6. How Outliers Challenge Predictive Models

In the realm of data analysis, the presence of outliers can often be a double-edged sword. On one hand, they represent deviations from the norm, anomalies that can skew the results of predictive models and challenge the assumptions upon which these models are built. On the other hand, outliers can be harbingers of new discoveries, pointing to data points that may hold significant meaning or represent emerging trends. The R-squared value, a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable in a regression model, is particularly sensitive to outliers. This sensitivity can lead to what is known as the 'R-squared Rebellion', where the predictive power of a model is questioned due to the influence of these anomalous points.

From a statistical perspective, outliers can drastically reduce the R-squared value, indicating a poor fit of the model to the data. However, from a domain expert's point of view, these outliers may contain valuable information that could lead to a better understanding of the underlying phenomenon being studied. Therefore, it's crucial to examine outliers from multiple angles before deciding on their treatment in the analysis.

Here are some in-depth insights into how outliers challenge predictive models:

1. Impact on Model Accuracy: Outliers can cause a model to become overfitted or underfitted. An overfitted model may perform well on the training dataset but poorly on unseen data. Conversely, an underfitted model fails to capture the underlying trends of the data, including those indicated by outliers.

2. Influence on Coefficients: In regression analysis, outliers can have a disproportionate effect on the coefficients, leading to misleading interpretations of the data. For example, a single outlier in a dataset can significantly alter the slope of a regression line, affecting all predictions.

3. Effect on Assumptions: Many predictive models assume normally distributed errors. Outliers can violate this assumption, leading to incorrect conclusions. It's essential to test for normality and consider transformations or alternative models if outliers are present.

4. Challenges in Clustering: In unsupervised learning, such as clustering, outliers can lead to the creation of additional clusters or distort the shape and size of natural clusters, affecting the overall results of the analysis.

5. Detection and Treatment: Identifying outliers is not always straightforward. Techniques like the IQR (Interquartile Range) method or Z-scores can help detect them, but deciding whether to remove, adjust, or keep them requires careful consideration of the context.

To illustrate these points, let's consider an example from the field of real estate. Imagine a dataset of home prices where most homes are priced between $100,000 and $500,000, but there are a few mansions priced over $5 million. These mansions are outliers that could significantly affect the predictive model's accuracy. If the goal is to predict average home prices, the model might benefit from removing these outliers. However, if the goal is to understand the full range of the housing market, including luxury homes, then these outliers provide valuable information and should be included in the analysis.
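
A hedged sketch of that housing example, assuming statsmodels is available (all figures are invented), shows how influence diagnostics such as leverage and Cook's distance single out the mansions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
sqft = np.append(rng.uniform(900, 2_500, size=40), [12_000.0, 15_000.0])  # two mansions
price = 50_000 + 180 * sqft + rng.normal(0, 20_000, size=sqft.size)
price[-2:] = [6_500_000.0, 8_000_000.0]  # mansions priced far above the overall trend

model = sm.OLS(price, sm.add_constant(sqft)).fit()
influence = model.get_influence()
cooks_d = influence.cooks_distance[0]   # (distances, p-values); keep the distances
leverage = influence.hat_matrix_diag

# The two mansion rows dominate both diagnostics
for i in np.argsort(cooks_d)[-3:][::-1]:
    print(f"obs {i:>2}: sqft={sqft[i]:>8.0f}  Cook's D={cooks_d[i]:.2f}  leverage={leverage[i]:.2f}")
```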

Outliers are not inherently good or bad; their impact on predictive models depends on the context of the analysis and the goals of the researcher. By carefully examining outliers from different perspectives, data analysts can make informed decisions on how to handle these 'R-squared Rebels' and enhance the robustness of their predictive models.


7. Outliers That Changed the Game

In the realm of data analysis, outliers are not just anomalies; they are the harbingers of new insights, the gatekeepers of innovation, and often, the silent disruptors that redefine the boundaries of what's considered normal. These statistical mavericks challenge the status quo, prompting analysts to look beyond the comfort of the central tendency and the predictability of the standard deviation. They compel us to ask the deeper questions: Why does this data point deviate so drastically? Is there an underlying pattern or cause that we're missing? Could this outlier be the key to unlocking a whole new understanding of our dataset?

1. The Pharmaceutical Breakthrough: In a study of drug efficacy, one outlier stood out, showing an unusually high response rate. Initially dismissed as an error, further investigation revealed a genetic marker that made certain patients more receptive to the treatment. This outlier led to personalized medicine approaches, tailoring treatments to individual genetic profiles.

2. The Financial Anomaly: A financial analyst observed an outlier in stock market data, a stock that moved contrary to market trends. This prompted a deeper dive into the company's practices, uncovering innovative strategies that made it recession-proof. Investors who recognized this outlier as an opportunity rather than a risk reaped significant rewards.

3. The Climate Change Indicator: In a temperature dataset spanning decades, a single year stood out with a drastic spike in global temperatures. While it could have been dismissed as an anomaly, it instead sparked a series of studies that confirmed the accelerating pace of climate change, influencing policy and awareness on a global scale.

4. The Social Media Sensation: When analyzing social media trends, one influencer's post metrics were off the charts. This outlier wasn't just a one-hit wonder but a reflection of a shifting cultural tide, signaling the rise of a new genre of digital content that would soon dominate the online landscape.

5. The Educational Outlier: In a national education assessment, one school's scores were dramatically higher than the rest. Rather than attributing it to chance, further investigation revealed innovative teaching methods that, when adopted more widely, revolutionized educational approaches and outcomes.

These case studies exemplify the transformative power of outliers. They remind us that in the vast sea of data, the most significant discoveries often come from the least expected places. By embracing outliers, we open ourselves to the potential of groundbreaking change and the evolution of knowledge across various domains. Outliers are not just statistical anomalies; they are the catalysts for progress, the sparks that ignite the flames of discovery, and the beacons that guide us toward uncharted territories in data analysis. They are, indeed, the R-squared rebels, defying the confines of correlation, and paving the way for a new era of analytical enlightenment.

8. Techniques for Cleaner Data

Outliers can significantly skew the results of data analysis, leading to misleading conclusions. These anomalous points can arise due to various reasons such as measurement errors, data entry errors, or genuine variability in the data. It's crucial to identify and mitigate the influence of outliers to ensure the integrity of data analysis. This can be approached from different perspectives, each offering unique insights into the nature of outliers and the best practices for dealing with them.

From a statistical standpoint, outliers can inflate or deflate key metrics, such as the mean and standard deviation, affecting the overall distribution of the data. In machine learning, outliers can lead to overfitting, where a model learns from the noise rather than the signal. In business analytics, outliers can indicate either errors or valuable insights into unusual but significant events.

Here are some techniques to mitigate the influence of outliers:

1. Trimming or Winsorizing: This involves removing the extreme values from the data set or capping them at a certain percentile. For example, in a dataset of test scores, if the top 5% are above 95 and the bottom 5% below 55, one might choose to cap the scores at these thresholds.

2. Transformation: Applying a mathematical transformation to the data can reduce the impact of outliers. Common transformations include the logarithm, square root, or Box-Cox transformation. For instance, a log transformation can be particularly effective when dealing with economic data that naturally exhibits exponential growth.

3. Robust Statistical Measures: Using median instead of mean, or robust measures like the interquartile range (IQR) instead of standard deviation, can provide a more accurate picture of the central tendency and spread of the data. For example, the median income of a neighborhood will not be as affected by a single billionaire as the mean income would be.

4. Z-Score Method: Calculating the Z-score for each data point can help identify outliers. A common threshold is a Z-score of 3 or -3, indicating that the data point is three standard deviations away from the mean. For example, in a factory setting, a production time with a Z-score of 3.5 might indicate a process anomaly.

5. IQR Method: The IQR is the range between the first quartile (25th percentile) and the third quartile (75th percentile). Data points that fall below Q1 - 1.5 × IQR or above Q3 + 1.5 × IQR are typically considered outliers. For instance, in real estate, a house price outside this range might be flagged for further investigation.

6. Machine Learning Algorithms: Some algorithms are inherently more robust to outliers. For example, Random Forest can handle outliers in the training data because it builds multiple trees and averages their predictions.

7. Domain-Specific Methods: In certain fields, there are established methods for dealing with outliers. For instance, in finance, the modified Dietz method is used to measure portfolio performance, accounting for the timing and size of cash flows, which can be outliers.

In practice, a combination of these methods is often employed to effectively clean the data. It's important to document the rationale behind the chosen method and the potential impact on the analysis. Moreover, it's essential to consider the context: sometimes, what appears to be an outlier may actually be a critical piece of information that should be investigated rather than discarded. For example, in fraud detection, those outliers could represent fraudulent activities.
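
As a brief illustration of the first two techniques in the list, here is a sketch assuming SciPy is available (the income figures are invented); winsorizing caps the extremes, while a log transform compresses the multiplicative spread:

```python
import numpy as np
from scipy.stats.mstats import winsorize

incomes = np.array([42_000., 55_000., 61_000., 48_000., 73_000.,
                    58_000., 66_000., 51_000., 59_000., 2_500_000.])  # one extreme earner

print(f"Raw mean:        {incomes.mean():,.0f}")   # dragged above $300,000 by a single value

capped = winsorize(incomes, limits=[0.10, 0.10])   # cap the lowest and highest 10% of values
print(f"Winsorized mean: {capped.mean():,.0f}")    # back near a typical income

geo_mean = np.exp(np.log(incomes).mean())          # log transform, then back-transform
print(f"Geometric mean:  {geo_mean:,.0f}")         # far less sensitive to the outlier
```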

Ultimately, the goal is to achieve cleaner data that accurately represents the underlying phenomena without being unduly influenced by anomalies. By carefully considering the nature of the data and the purpose of the analysis, one can choose the most appropriate techniques to mitigate the influence of outliers.


9. The Paradox of Outliers in Data Analysis

Outliers in data analysis are akin to the mavericks of the dataset, marching to the beat of their own drum and often skewing the results in unexpected ways. They are the data points that do not fit neatly within the expected pattern of the rest and can significantly affect the outcome of statistical models, particularly the R-squared value, which measures the proportion of variance for a dependent variable that's explained by an independent variable in a regression model. While outliers can sometimes be dismissed as mere anomalies or errors, they often hold the key to deeper insights and hidden truths within the data. They challenge the status quo, prompting analysts to dig deeper and question their assumptions.

From different perspectives, outliers can be seen both as a nuisance and a boon:

1. Statistical Perspective: Statisticians often view outliers as noise that needs to be managed. They can distort the results of an analysis, leading to misleading conclusions. For example, a single outlier can significantly alter the mean of a dataset, whereas the median remains robust to such extreme values.

2. Business Perspective: In business analytics, outliers can signal critical anomalies, like fraudulent transactions or system errors. They can also highlight exceptional cases of success or failure that may warrant further investigation.

3. Scientific Perspective: Researchers may find outliers particularly intriguing, as they could represent groundbreaking discoveries or new phenomena. In the field of medicine, an outlier could indicate a rare side effect of a drug that requires immediate attention.

Examples serve to illuminate the paradoxical nature of outliers:

- In finance, an outlier in stock price movements might indicate insider trading, which, while skewing the data, also provides an opportunity for regulatory intervention.

- In healthcare, a patient with an atypical response to treatment might lead to the discovery of new medical insights or the need for personalized medicine.

Ultimately, the paradox of outliers lies in their dual role as both disruptors and enlighteners. They compel analysts to refine their models and question their hypotheses, driving the pursuit of a more nuanced understanding of the data. Outliers remind us that in the realm of data analysis, there is always more than meets the eye, and the stories they tell can lead to significant breakthroughs or improvements in various fields. The key is to approach them with a balance of skepticism and curiosity, ensuring they are neither dismissed out of hand nor accepted without scrutiny.
