R-squared: The Story of Variability and Sum of Squares

Table of Contents

1. Understanding the Basics

2. Where It All Begins

3. Building the Foundation of R-squared

4. What Does It Really Tell Us?

5. From Data to R-squared

6. The Limitations of R-squared

7. R-squared in Different Models

8. Adjusted R-squared and Predictive Power

9. Real-World Applications and Case Studies

1. Understanding the Basics

R-squared is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. While it is a widely used metric to gauge the performance of a regression model, it is not without its critics and limitations. From one perspective, a high R-squared is often seen as indicative of a model that fits the data well. However, from another viewpoint, an over-reliance on R-squared can lead to models that are overfit to the data and may not perform well on unseen data. It's also important to note that R-squared alone cannot determine whether the coefficient estimates and predictions are biased, which is why it should be used in conjunction with other tools like residual plots.

1. Definition and Calculation: R-squared is calculated as the ratio of the explained variance to the total variance. It is expressed as a value between 0 and 1. The formula for R-squared is:

$$ R^2 = 1 - \frac{SS_{res}}{SS_{tot}} $$

Where \( SS_{res} \) is the residual sum of squares and \( SS_{tot} \) is the total sum of squares; a short computational sketch of this formula appears after the list below.

2. Interpretation: An R-squared of 1 indicates that the regression predictions perfectly fit the data. On the other hand, an R-squared of 0 indicates that the model does not explain any of the variability of the response data around its mean.

3. Contextual Use: In the context of linear regression, R-squared is used to provide an estimate of the strength of the relationship between the model and the dependent variable. For example, in a model predicting house prices based on square footage, a high R-squared would indicate a strong relationship between square footage and price.

4. Limitations: One of the main criticisms of R-squared is that it can be artificially inflated by adding more predictors to the model, regardless of whether they are relevant to the outcome. This is why adjusted R-squared is often preferred in multiple regression analyses, as it adjusts for the number of predictors in the model.

5. Comparative Insights: From a data scientist's perspective, R-squared is a starting point for model evaluation but not the final word. They might look at other metrics like the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) for model selection. Meanwhile, a statistician might emphasize the importance of understanding the underlying assumptions of the regression model before relying on R-squared values.

6. Practical Example: Consider a simple linear regression where we are trying to predict a student's GPA based on the number of hours they study per week. If our R-squared value is 0.75, this means that 75% of the variability in GPA can be explained by the number of study hours. However, this also means that 25% of the variability is unexplained, which could be due to factors like teaching quality or student health.
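To make the arithmetic in items 1 and 6 concrete, here is a minimal sketch in Python with NumPy. The study-hours and GPA values are hypothetical, chosen only to illustrate how \( R^2 = 1 - SS_{res}/SS_{tot} \) is evaluated for a fitted line.

```python
# Minimal sketch: evaluate R^2 = 1 - SS_res / SS_tot for a simple fitted line.
# The (hours, GPA) values below are hypothetical, for illustration only.
import numpy as np

hours = np.array([5, 10, 15, 20, 25, 30], dtype=float)   # hypothetical weekly study hours
gpa = np.array([2.4, 2.8, 3.0, 3.3, 3.5, 3.9])           # hypothetical GPAs

slope, intercept = np.polyfit(hours, gpa, deg=1)          # least-squares line
predicted = slope * hours + intercept

ss_res = np.sum((gpa - predicted) ** 2)                   # residual sum of squares
ss_tot = np.sum((gpa - gpa.mean()) ** 2)                  # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"R-squared = {r_squared:.3f}")                     # share of GPA variance explained
```

Any set of paired observations can be substituted for the hypothetical arrays; the two sums of squares and their ratio are all the formula requires.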

While R-squared can provide valuable insights into the explanatory power of a regression model, it should be interpreted with caution and in the context of the model's purpose and the data's underlying structure. It is a useful but limited tool in the data analyst's toolkit, and its interpretation can vary significantly depending on the lens through which it is viewed.


2. Where It All Begins

Variability is the heart of statistical analysis, representing the spread of data points around a central value. It's the essence of diversity in data, the spice that gives flavor to the blandness of uniformity. Without variability, there would be no need for statistics, as every observation would be identical, and every experiment would yield the same result. But where does this variability originate? The genesis of variability is a complex interplay of factors that can be both intrinsic and extrinsic to the data set.

From a statistical perspective, variability can arise from measurement errors, natural fluctuations in the system being observed, or differences in the subjects of study. For example, in a clinical trial, the variability in patient responses to a drug might be due to individual genetic differences, lifestyle factors, or even the precision of the measurement tools used to assess outcomes. In manufacturing, variability in product dimensions might stem from machine calibration, raw material quality, or the skill level of the operator.

1. Measurement Error: This is the discrepancy between the measured value and the true value. For instance, if you're measuring the height of a group of people, the variability could come from the accuracy of the measuring tape or the consistency with which measurements are taken.

2. Natural Fluctuations: These are the inherent variations found in nature. Take, for example, the yield of a crop from year to year; despite consistent farming practices, the output will vary due to weather conditions, soil fertility, and pest infestations.

3. Subject Differences: In any study involving humans, animals, or even machines, there will be variability due to individual characteristics. If we consider test scores among students, the variability reflects differences in study habits, prior knowledge, and test-taking skills.

4. Experimental Conditions: Even in controlled experiments, slight changes in conditions can introduce variability. For example, a slight fluctuation in temperature or humidity in a lab setting can affect the outcome of chemical reactions.

5. Sampling: The way in which samples are drawn from a population can introduce variability. If the sampling method is biased or the sample size is too small, the variability observed may not accurately reflect the population's variability.

To illustrate these points, let's consider the field of agriculture. A farmer sowing seeds of the same crop across different fields will observe variability in the harvest. This variability could be due to differences in soil quality (natural fluctuations), the precision of the sowing equipment (measurement error), or even the genetic diversity of the seeds themselves (subject differences). Each of these factors contributes to the overall variability observed in the crop yield.

Understanding the genesis of variability is crucial for interpreting data correctly. It allows statisticians to separate the signal from the noise, to identify patterns and trends amidst the chaos of diverse data points. By embracing and investigating variability, we gain insights into the underlying processes and can make informed decisions based on statistical evidence. Variability is not just a challenge to overcome; it's a phenomenon to understand, a puzzle to solve, and ultimately, a story to tell.

3. Building the Foundation of R-squared

In the realm of statistics, the concept of R-squared emerges as a pivotal metric, offering a window into the proportion of variance in the dependent variable that can be explained by the independent variables in a regression model. At the heart of this metric lies the sum of squares, a foundational element that partitions the total variability into components that provide insights into the effectiveness of the model. The sum of squares is not just a mere calculation; it's a narrative of the data's journey, telling us how much of the story is captured by our model and how much is still left untold.

1. Total Sum of Squares (TSS): It encapsulates the total variance in the response variable. Imagine you're measuring the heights of a group of people. The TSS would represent the variability of their heights from the average height of the group. Mathematically, it's the sum of the squared differences between each observation and the overall mean:

$$ TSS = \sum_{i=1}^{n} (y_i - \bar{y})^2 $$

2. Explained Sum of Squares (ESS): This portion reflects the variance explained by the model. If we predict everyone's height using the average, the ESS would be how much better our model does compared to this naive approach. It's the sum of the squared differences between the predicted values and the overall mean:

$$ ESS = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 $$

3. Residual Sum of Squares (RSS): The unexplained variance, or the error, is what the RSS represents. It's the discrepancy between the observed values and the ones predicted by our model. In our height example, it would be the sum of the squared differences between each person's actual height and the height predicted by our model:

$$ RSS = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 $$

Example: Consider a simple linear regression where we're trying to predict a student's test score based on the number of hours they studied. The TSS tells us how varied the test scores are. The ESS shows how much of this variation we can explain by the hours studied, and the RSS reveals the variation in scores that our model can't explain—perhaps due to factors like test anxiety or prior knowledge.
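As a complement to the example, the following short sketch (Python with NumPy, hypothetical hours-and-scores data) computes the three sums of squares for a least-squares line and confirms that TSS = ESS + RSS, which is exactly what lets R-squared be read as a proportion.

```python
# Sketch of the sum-of-squares decomposition on hypothetical (hours, score) data.
import numpy as np

hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)          # hypothetical hours studied
scores = np.array([52, 58, 61, 64, 70, 73, 75, 83], dtype=float) # hypothetical test scores

slope, intercept = np.polyfit(hours, scores, deg=1)
fitted = slope * hours + intercept

tss = np.sum((scores - scores.mean()) ** 2)    # total sum of squares
ess = np.sum((fitted - scores.mean()) ** 2)    # explained sum of squares
rss = np.sum((scores - fitted) ** 2)           # residual sum of squares

print(f"TSS = {tss:.1f}, ESS = {ess:.1f}, RSS = {rss:.1f}")
print(f"ESS + RSS = {ess + rss:.1f}")          # equals TSS for least squares with an intercept
print(f"R-squared = {1 - rss / tss:.3f}")
```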

The sum of squares is not just a set of calculations but a narrative tool that helps us understand the extent to which our model captures the underlying patterns in the data. It's a testament to the power of statistical analysis in uncovering the hidden stories within numbers.

4. What Does It Really Tell Us?

R-squared is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. While it is a widely used metric to gauge the performance of a regression model, it's crucial to understand what R-squared really tells us and, perhaps more importantly, what it does not.

From one perspective, a high R-squared value can be interpreted as a model with a good fit, suggesting that the model explains a large portion of the variability in the response data. However, this interpretation comes with caveats. For instance, R-squared alone doesn't indicate whether the independent variables are a cause of the changes in the dependent variable; it simply measures the association between the two.

1. The Scale of R-squared:

R-squared values range from 0 to 1, where 0 indicates that the model explains none of the variability of the response data around its mean, and 1 indicates that the model explains all the variability of the response data around its mean.

2. The Misleading Nature of R-squared:

It's possible to have a high R-squared for a model that does not fit the data well. This can happen when the model is overfitted, meaning it's too complex and captures the random noise in the data as if it were part of the underlying relationship.

3. The Context Dependency of R-squared:

The usefulness of R-squared is context-dependent. In fields where controlled experiments are common, such as in agriculture or manufacturing, high R-squared values are often expected. In contrast, in fields like social sciences, a lower R-squared is more common and acceptable.

4. R-squared in Simple vs. Multiple Regression:

In simple linear regression, R-squared is the square of the correlation between the response and the predictor. In multiple regression, it's the square of the correlation between the observed and predicted values of the dependent variable.

5. Adjusted R-squared:

Adjusted R-squared is a modified version of R-squared that has been adjusted for the number of predictors in the model. It increases only if the new term improves the model more than would be expected by chance.

6. R-squared and Non-linear Relationships:

R-squared as conventionally defined assumes a linear model. Data with a strong but non-linear relationship can produce a low R-squared from a straight-line fit even though a suitable non-linear model would describe the data well, so a low value does not by itself rule out a real relationship.

7. R-squared and Dataset Size:

Sample size affects how reliable R-squared is. In small samples, especially with many predictors, R-squared tends to be inflated by chance; larger samples give more stable estimates, and adjusted R-squared corrects for part of this small-sample inflation.

Examples to Highlight Ideas:

- Example of Scale Interpretation:

Consider a model with an R-squared of 0.85. This suggests that 85% of the variance in the dependent variable can be explained by the model. However, this doesn't mean the model is 85% accurate in predicting outcomes.

- Example of Misleading Nature:

A complex polynomial regression might yield an R-squared of 0.95, but when applied to new data it performs poorly because it was modeling the noise, not the underlying relationship (a short sketch after these examples illustrates this).

- Example of Context Dependency:

In psychology, a study that results in an R-squared of 0.3 might be considered quite successful, given the variability of human behavior, whereas in physics, such a low R-squared would likely be unacceptable.

- Example of Adjusted R-squared:

If adding a new predictor to a model only increases the R-squared from 0.75 to 0.751, the adjusted R-squared will likely decrease, indicating that the new predictor is not adding significant explanatory power to the model.
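To illustrate the overfitting example above, here is a small synthetic sketch in Python with NumPy. The data are generated from a straight line plus noise, so a degree-9 polynomial will typically post a higher R-squared on the data it was fit to but a noticeably lower one on fresh data.

```python
# Sketch: a flexible model can score a high R^2 on its own training data yet
# generalize poorly. All numbers are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# True relationship: a straight line plus noise.
x_train = np.linspace(-1, 1, 12)
y_train = 2.0 * x_train + rng.normal(0, 1.0, size=x_train.size)
x_test = rng.uniform(-1, 1, size=50)
y_test = 2.0 * x_test + rng.normal(0, 1.0, size=x_test.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, deg=degree)    # fit on training data only
    r2_train = r_squared(y_train, np.polyval(coeffs, x_train))
    r2_test = r_squared(y_test, np.polyval(coeffs, x_test))
    print(f"degree {degree}: train R^2 = {r2_train:.2f}, test R^2 = {r2_test:.2f}")
```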

While R-squared can provide valuable insights into the variability explained by a model, it should be interpreted with caution and in conjunction with other metrics and domain knowledge. It is not a definitive measure of a model's validity or predictive power and should not be used in isolation to make decisions.


5. From Data to R-squared

In the realm of statistical analysis, the journey from raw data to the computation of R-squared is a tale of transformation and interpretation. This saga begins with a dataset, often a collection of observations that represent a slice of reality. Each number, each entry in this dataset, holds a story, a piece of the puzzle that analysts and statisticians strive to understand. The process is meticulous, involving various steps of cleaning, sorting, and manipulating data to extract meaningful patterns. As we delve into this narrative, we encounter the concept of variability, which is the essence of data's dynamism. Variability captures the essence of difference – how much individual data points diverge from the average.

To truly appreciate the calculation of R-squared, we must first understand the sum of squares, a critical component in the analysis of variance. The sum of squares quantifies the total variation within a dataset, providing a foundation upon which further analysis is built. It is the sum of the squared differences between each observation and the overall mean, serving as a numerical depiction of dispersion. From this starting point, we embark on a quest to dissect and comprehend the intricacies of this pivotal statistic.

1. Total Sum of Squares (TSS): TSS represents the total variance in the response variable. It is calculated as the sum of the squared differences between each observation and the grand mean. For example, if we have a dataset of test scores, TSS would quantify how much the scores of all students deviate from the average score of the group.

2. Explained Sum of Squares (ESS): The ESS, also called the regression sum of squares, measures the amount of variance explained by the model. It is the sum of the squared differences between the predicted values and the grand mean. In our test score example, the ESS would represent how well a student's hours of study explain the variance in their test scores.

3. Residual Sum of Squares (RSS): The RSS, or unexplained sum of squares, captures the variance that the model fails to explain. It is the sum of the squared differences between the observed values and the predicted values. Continuing with the test scores, the RSS would reflect the portion of a student's score that cannot be accounted for by their hours of study alone.

4. Coefficient of Determination (R-squared): R-squared is the ratio of ESS to TSS, representing the proportion of variance explained by the model. It ranges from 0 to 1, where 0 indicates that the model explains none of the variability and 1 indicates perfect explanation. For instance, an R-squared value of 0.75 in our test score scenario suggests that 75% of the variance in test scores can be explained by the hours of study.

To illustrate, let's consider a simple linear regression where we predict a student's test score based on their hours of study. Suppose the average test score is 70 out of 100, and we have five students with scores ranging from 65 to 85. The TSS would be the sum of the squared differences between each student's score and the average score. If our linear model predicts scores with reasonable accuracy, the ESS will make up most of the TSS, indicating that hours of study are a good predictor of test scores, and the RSS, the part left unexplained, will be correspondingly small.
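A short sketch of this calculation, with hypothetical numbers chosen to match the description (five students, mean score 70, scores from 65 to 85), shows the two equivalent forms of R-squared and its equality with the squared correlation between observed and fitted values.

```python
# Sketch of the test-score example: five hypothetical students.
import numpy as np

hours = np.array([2, 3, 5, 6, 10], dtype=float)      # hypothetical hours of study
scores = np.array([65, 65, 67, 68, 85], dtype=float) # hypothetical scores, mean = 70

slope, intercept = np.polyfit(hours, scores, deg=1)
predicted = slope * hours + intercept

tss = np.sum((scores - scores.mean()) ** 2)        # total sum of squares
ess = np.sum((predicted - scores.mean()) ** 2)     # explained (regression) sum of squares
rss = np.sum((scores - predicted) ** 2)            # residual sum of squares

r2_explained = ess / tss                             # R^2 as the explained share of TSS
r2_residual = 1 - rss / tss                          # the equivalent 1 - RSS/TSS form
r2_corr = np.corrcoef(scores, predicted)[0, 1] ** 2  # squared correlation of observed and fitted

print(f"{r2_explained:.3f}  {r2_residual:.3f}  {r2_corr:.3f}")  # all three agree
```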

In summary, the calculation saga from data to R-squared is a narrative of uncovering the hidden stories within numbers. It is a testament to the power of statistical tools to bring clarity to the chaos of data, allowing us to quantify the unquantifiable and make informed decisions based on empirical evidence. As we journey through this saga, we gain not only numerical insights but also a deeper understanding of the phenomena we seek to explain.


6. The Limitations of R-squared

R-squared is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. While it is a widely used metric to gauge the fit of a regression model, relying solely on R-squared to assess model performance can be misleading. It's important to understand that a high R-squared value does not necessarily imply a causal relationship, nor does it guarantee the accuracy of predictions.

From a statistician's perspective, R-squared is only one part of the story. It doesn't account for whether the independent variables are a cause of the changes in the dependent variable; it simply measures how well the model explains the data. This is why it's crucial to look beyond R-squared and consider other statistics like the Adjusted R-squared, which adjusts for the number of predictors in the model, or the p-value, which can indicate whether the observed relationships are statistically significant.

Here are some limitations of R-squared that are essential to consider:

1. Sensitivity to Extreme Values: R-squared is sensitive to outliers. A single extreme observation can shift it sharply; in particular, a high-leverage point that happens to line up with the fitted trend can inflate R-squared and give a false sense of a good fit.

2. Doesn't Indicate Predictive Accuracy: A high R-squared does not necessarily mean that the model has good predictive accuracy. For example, in time-series data, a model might have a high R-squared but poor out-of-sample predictive power.

3. Not Suitable for All Models: R-squared is not appropriate for all types of regression models. For instance, in logistic regression, which is used for binary outcomes, R-squared is not a valid measure of model fit.

4. No Information on Bias: R-squared does not convey whether a model is biased; it does not distinguish between systematic overpredictions or underpredictions.

5. Dependence on the Response Scale: R-squared values are not comparable across models fit to different transformations of the dependent variable. Squaring or taking logarithms of the response changes the variance being explained, so a transformation can make R-squared look better without the model predicting the original quantity any more accurately.

6. Ignores the Size of Errors: R-squared does not report the magnitude of errors in the units of the response; it only accounts for the proportion of explained variance. A model can have a respectable R-squared and still make errors that are too large for the task, which is why error metrics such as RMSE are usually examined alongside it.

To illustrate these points, let's consider an example. Suppose we have a dataset of housing prices and we build a regression model to predict prices based on features like square footage, number of bedrooms, and location. If we include an outlier, such as a mansion priced significantly higher than other houses in the dataset, our R-squared might suggest a great fit. However, if we remove the outlier, the R-squared might drop, indicating that the model isn't as robust as we thought.
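A small synthetic sketch in Python with NumPy makes the point: one extreme, high-leverage observation (the mansion) can dominate the variance and lift R-squared sharply. The sizes and prices below are invented solely for illustration.

```python
# Sketch: a single high-leverage point can inflate R^2. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)

def fit_r_squared(x, y):
    slope, intercept = np.polyfit(x, y, deg=1)
    y_hat = slope * x + intercept
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Ordinary houses: a weak size-price relationship (sizes in hundreds of sq ft, prices in $1000s).
size = rng.uniform(10, 30, size=30)
price = 200 + 2 * size + rng.normal(0, 40, size=30)
print(f"R^2 without the mansion: {fit_r_squared(size, price):.3f}")

# One mansion far outside the range of the rest of the data.
size_with = np.append(size, 200)
price_with = np.append(price, 2500)
print(f"R^2 with the mansion:    {fit_r_squared(size_with, price_with):.3f}")
```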

In another scenario, if we're working with time-series data, such as predicting stock prices, a model might have a high R-squared when tested against historical data. But when we try to use the model to predict future prices, it might perform poorly because the conditions that existed during the model-building phase have changed.

Therefore, while R-squared can be a useful indicator of model fit, it should be used in conjunction with other metrics and a thorough understanding of the data and the model's assumptions. It's part of a broader analytical process that involves checking residuals, considering alternative models, and validating the model against new data. By doing so, we can build models that are not only statistically sound but also practically useful.


7. R-squared in Different Models

R-squared, or the coefficient of determination, is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. While it is widely used to gauge the accuracy of a model, it's important to understand that a higher R-squared value does not always indicate a better model. Different models can yield varying R-squared values depending on the data and the type of regression used. Therefore, a comparative analysis of R-squared across different models can provide valuable insights into their performance and suitability for specific datasets or research questions.

1. Simple Linear Regression: In simple linear regression, where the model predicts the dependent variable based on a single independent variable, the R-squared value is a clear indicator of how well the linear model explains the variability of the data. For example, if a model predicting house prices based on square footage has an R-squared of 0.65, it means that 65% of the variability in house prices can be explained by the square footage alone.

2. Multiple Linear Regression: As we introduce more variables into a regression model, the in-sample R-squared generally increases because the model has more information with which to explain variability. However, this doesn't necessarily mean the model is better. For instance, adding a variable that has little to no relationship with the dependent variable can inflate the R-squared without truly improving the model's predictive power (see the sketch after this list).

3. Polynomial Regression: Polynomial regression models, which fit a nonlinear relationship between the independent and dependent variables, can have very high R-squared values. This is because they can capture more complex patterns in the data. However, a high R-squared in such models could also indicate overfitting, where the model is too closely tailored to the training data and may not perform well on new, unseen data.

4. Ridge and Lasso Regression: These regularization techniques are used to prevent overfitting in models with large numbers of predictors. They can result in lower R-squared values compared to simple linear regression models, but this trade-off often leads to better generalization and performance on test data.

5. Time Series Models: In time series analysis, R-squared is less commonly used because these models often focus on capturing trends, seasonality, and cycles rather than explaining variability. For example, an ARIMA model's performance is usually assessed using other metrics, like the Akaike Information Criterion (AIC), rather than R-squared.
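To back up the multiple-regression point above (item 2), here is a small sketch in Python with NumPy: the second fit adds a predictor that is pure random noise, and the in-sample R-squared still creeps upward. The data are synthetic and the variable names are illustrative.

```python
# Sketch: adding an irrelevant predictor never lowers in-sample R^2. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n = 100

sqft = rng.uniform(50, 250, size=n)                   # relevant predictor
noise_x = rng.normal(size=n)                          # irrelevant predictor
price = 50 + 1.5 * sqft + rng.normal(0, 40, size=n)   # response depends only on sqft

def ols_r_squared(x_cols, y):
    X = np.column_stack([np.ones(len(y)), *x_cols])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2)

print(f"R^2, sqft only:         {ols_r_squared([sqft], price):.4f}")
print(f"R^2, sqft + noise col:  {ols_r_squared([sqft, noise_x], price):.4f}")
```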

While R-squared is a useful statistic for comparing the explanatory power of regression models, it should not be the sole criterion for model selection. Analysts should consider the context, the complexity of the data, and the purpose of the model when interpreting R-squared values. It's also crucial to look at other performance metrics and conduct cross-validation to ensure that the model not only explains the past but can also predict the future accurately.


8. Adjusted R-squared and Predictive Power


In the realm of statistical analysis, the concept of R-squared is a cornerstone, providing a measure of how well the independent variables in a regression model explain the variability of the dependent variable. However, as we delve deeper into the intricacies of regression analysis, we encounter the Adjusted R-squared, a refinement of R-squared that accounts for the number of predictors in the model. This adjustment is crucial because adding more predictors to a model will always increase the R-squared value, even if those predictors are not statistically significant. The Adjusted R-squared compensates for this by incorporating the number of predictors and the sample size into its calculation, thus providing a more accurate measure of the model's predictive power.

From a different perspective, the predictive power of a model is its ability to make accurate predictions on new, unseen data. It's not just about having a model that explains the past well; it's about having a model that can predict the future effectively. Here, the Adjusted R-squared shines by penalizing complexity and rewarding simplicity, aligning with the principle of parsimony or Occam's razor in model selection.

1. Calculation of Adjusted R-squared: The formula for Adjusted R-squared is $$ 1 - (1-R^2)\frac{n-1}{n-p-1} $$ where \( R^2 \) is the R-squared, \( n \) is the sample size, and \( p \) is the number of predictors. This formula adjusts the R-squared for the number of predictors relative to the sample size, discouraging overfitting (a short sketch implementing it appears after the examples below).

2. Interpretation: A higher adjusted R-squared indicates a model with better explanatory power that has not been unduly influenced by the number of predictors. It's a more reliable statistic for comparing models with different numbers of predictors.

3. Use in Model Selection: When choosing between models, the one with the higher Adjusted R-squared is generally preferred, assuming that other diagnostic tests (like F-tests or t-tests) also support its validity.

4. Examples:

- Consider a model predicting house prices based on square footage and number of bedrooms. If we add a predictor like the color of the front door, the R-squared might increase slightly due to chance, but the Adjusted R-squared will likely decrease, signaling that the new predictor does not improve the model's predictive capability.

- In a study examining the factors affecting crop yield, variables like rainfall, fertilizer use, and seed variety might all contribute significantly to the R-squared. However, if we add a variable like the phase of the moon, it's unlikely to be significant, and the Adjusted R-squared will help in identifying its lack of contribution.
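Here is the sketch promised above: a direct implementation of the adjusted R-squared formula, applied to synthetic house-price data. Adding an irrelevant "front door color" column nudges plain R-squared up, while adjusted R-squared will usually fall, because the tiny gain does not beat the penalty for the extra predictor.

```python
# Sketch: adjusted R^2 penalizes predictors that add nothing. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 60

sqft = rng.uniform(80, 300, size=n)
bedrooms = rng.integers(1, 6, size=n).astype(float)
door_hue = rng.uniform(0, 360, size=n)                      # irrelevant "front door color"
price = 40 + 1.2 * sqft + 15 * bedrooms + rng.normal(0, 30, size=n)

def r2_and_adjusted(x_cols, y):
    X = np.column_stack([np.ones(len(y)), *x_cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
    p = len(x_cols)                                          # number of predictors
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)     # the formula from item 1
    return r2, adj

for cols, label in [([sqft, bedrooms], "sqft + bedrooms"),
                    ([sqft, bedrooms, door_hue], "sqft + bedrooms + door color")]:
    r2, adj = r2_and_adjusted(cols, price)
    print(f"{label:30s} R^2 = {r2:.4f}   adjusted R^2 = {adj:.4f}")
```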

By considering Adjusted R-squared, analysts and researchers can make more informed decisions about the inclusion of variables in their models, leading to more robust and generalizable findings. It's a step towards ensuring that a model is not just a good fit for the data it was trained on, but also holds predictive power for new data it has yet to encounter. This balance between explanation and prediction is the hallmark of a well-constructed regression model.


9. Real-World Applications and Case Studies


R-squared, or the coefficient of determination, is a statistical measure that represents the proportion of the variance for a dependent variable that's explained by an independent variable or variables in a regression model. While the concept might seem abstract or theoretical, its practical applications are vast and varied, touching numerous fields such as finance, healthcare, engineering, and social sciences.

In finance, for example, R-squared is used to quantify how well a portfolio's movements can be explained by the movements in a benchmark index. A high R-squared, close to 1, indicates that a large portion of the portfolio's performance can be explained by the index's returns. This is particularly insightful for investors who are looking to understand the performance of mutual funds against their benchmarks.

From a healthcare perspective, R-squared plays a crucial role in epidemiological studies. When assessing the impact of certain risk factors on health outcomes, researchers rely on R-squared to determine how much of the outcome variability can be attributed to variations in the risk factors under study. This can be instrumental in understanding the effectiveness of interventions or in identifying the most significant predictors of health outcomes.

In the realm of engineering, R-squared is pivotal in quality control processes. Engineers use regression models to predict the outcomes of production processes and rely on R-squared to assess the model's accuracy. A high R-squared value would indicate that the model can reliably predict production outcomes, which is essential for process optimization and cost reduction.

Social scientists often turn to R-squared to evaluate models that explain social phenomena. For instance, a study on the impact of education on income levels would use R-squared to determine how much of the variability in income can be explained by educational attainment.

Here are some in-depth insights into the real-world applications of R-squared:

1. Portfolio Management: An R-squared value close to 1 suggests that the portfolio's returns are largely in sync with the market. This can be a double-edged sword; on one hand, it implies lower risk since the portfolio is moving with the market, but on the other hand, it also means limited opportunities for above-market returns (a short computational sketch of this regression appears after this list).

2. Predictive Analytics in Retail: Retail giants use R-squared to understand customer behavior. By analyzing sales data, they can predict future trends and stock accordingly. A high R-squared in this context means that the predictive model accounts for most of the variability in sales, leading to more accurate inventory management.

3. Climate Change Research: Scientists use R-squared to assess the accuracy of climate models. These models predict future temperatures based on various factors like CO2 emissions. A high R-squared value would indicate a strong relationship between the factors and temperature changes, bolstering the model's credibility.

4. Real Estate Pricing: Real estate analysts use regression models to price properties, considering factors like location, size, and amenities. R-squared helps in understanding how much of the property prices can be explained by these factors, which is crucial for both buyers and sellers in making informed decisions.
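As a sketch of the portfolio-management use in item 1, the snippet below regresses a simulated portfolio's daily returns on simulated benchmark returns and reads off the R-squared; both return series are invented, so the numbers only illustrate the mechanics.

```python
# Sketch: R^2 of portfolio returns against a benchmark. Simulated return series.
import numpy as np

rng = np.random.default_rng(4)
n_days = 250

benchmark = rng.normal(0.0004, 0.01, size=n_days)                   # simulated daily index returns
portfolio = 0.9 * benchmark + rng.normal(0.0, 0.004, size=n_days)   # tracks the index plus idiosyncratic noise

beta, alpha = np.polyfit(benchmark, portfolio, deg=1)
fitted = beta * benchmark + alpha
r_squared = 1 - np.sum((portfolio - fitted) ** 2) / np.sum((portfolio - portfolio.mean()) ** 2)

print(f"beta = {beta:.2f}, R^2 vs benchmark = {r_squared:.2f}")      # high R^2: moves largely with the index
```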

To illustrate with an example, consider a study aiming to understand the factors affecting house prices in a city. The regression model might include variables such as square footage, number of bedrooms, proximity to schools, and crime rates. An R-squared value of 0.85 would suggest that 85% of the variability in house prices can be explained by these variables, providing a solid foundation for pricing strategies.

R-squared is more than just a statistical measure; it's a bridge between data and decision-making across various industries. Its ability to quantify the explanatory power of models makes it an indispensable tool in the arsenal of analysts, researchers, and professionals seeking to make data-driven decisions. Whether it's understanding market movements, predicting health outcomes, optimizing manufacturing processes, or explaining social trends, R-squared serves as a key indicator of model reliability and effectiveness.

