# How Does An Accrued Liability Work With Example?


Definition of Accrued Liabilities

Accrued liabilities are expenses incurred by the business but not yet paid. Accrued expense is a part of the accrual system of accounting, which states that an expense is recorded when it is incurred, and revenue is recorded when it is earned.


This accounting system generates more accurate results as the expenses are matched with related revenues and are reported when the expense occurs, not when cash is paid.


Accrued liabilities are actual liabilities: the business has received the benefit, but it has not yet paid for it. For example, the services of employees have been received, but their salary is yet to be paid, or goods have been received, but payment is yet to be made. Here, the payment has to be made in a future period. If we don't record such expenses in our books, they will not reflect an accurate financial picture of the company's business. Accrued liabilities only occur when the business follows the accrual accounting system.

An accrual accounting system is a system in which all the expenses of a particular period are recorded in the same period in which they are incurred, irrespective of whether they are paid or not. Accrued liabilities are not recorded when the business follows the cash basis of accounting: under the cash basis, only expenses actually paid during the period are recorded. A company's balance sheet shows accrued liabilities under the current liabilities head.

How Does an Accrued Liability Work?

Accrued liabilities are recorded in the books of accounts at the end of the accounting period, and they are reversed in the period when they are paid. It is like a temporary account created in the books of accounts.

At the end of the accounting period (recording the accrual):

Expense A/c – Debit (recording the actual expense)

Accrued Liability A/c – Credit (expense not paid, hence liability created)

When the payment is made (reversing the accrual):

Accrued Liability A/c – Debit (reducing the liability on settlement)

Cash A/c – Credit (liability settled by paying cash)

The Accrued liabilities balance in the balance sheet will be reduced after payment.

Example of Accrued Liabilities

Suppose Company ABC Ltd. closes its books of accounts on the 31st of December every year. The company pays salaries to all its employees on the 5th of the next month. So, the salary for December will be paid on the 5th of January of the next fiscal year, i.e., 2023. The December salary payable is $50,000.

In this case, the liability to pay the employees has been incurred, but the payment is not yet done. Hence, salary expenses will be recorded, and an opposite accrued liability for the same will be created in the books of accounts, and the same will be reversed next month.
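The accrual and its reversal above can be sketched as a minimal ledger simulation (the `post` helper and the ledger structure are hypothetical illustrations; the $50,000 figure follows the ABC Ltd. example):

```python
# Minimal double-entry sketch of an accrual and its reversal.
ledger = []

def post(date, debit, credit, amount):
    """Record one journal entry as a dict of date, debit account, credit account, amount."""
    ledger.append({"date": date, "debit": debit, "credit": credit, "amount": amount})

# Dec 31: recognize the expense and create the accrued liability
post("2022-12-31", "Salary Expense", "Accrued Liability", 50_000)

# Jan 5: reverse the liability when cash is paid
post("2023-01-05", "Accrued Liability", "Cash", 50_000)

# Net balance of the Accrued Liability account after payment
credits = sum(e["amount"] for e in ledger if e["credit"] == "Accrued Liability")
debits = sum(e["amount"] for e in ledger if e["debit"] == "Accrued Liability")
balance = credits - debits
print(balance)  # 0 -> the liability account nets to zero after payment
```

After the second entry, the accrued liability balance in the balance sheet is reduced to zero, matching the description above.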

Types of Accrued Liabilities

There are two types:

Recurring accrued liabilities: These are the ones that occur in the ordinary course of business. For example, wages for the current month are paid on the 10th of next month.

Non-recurring accrued liabilities: These are not regular accrued liabilities and do not occur in the ordinary course of business. For example, heavy machinery was purchased, but payment was not yet made.

Accrued Liabilities vs Accounts Payable

The primary difference between accrued liabilities and accounts payable is that the accounts payable are billed to the company, but accrued liability is not yet billed.

Another difference is that the accounts payable are a liability that will be paid soon. On the other hand, accrued liability is generally accrued and paid over some time.

Accounts payable are recorded for any expense billed like supplier invoice, vendor payment, etc. At the same time, accrued liabilities are recorded at the end of the fiscal year.

For example, a company purchases machinery from a supplier on the 30th of December 2023, the shipment of which will arrive in the next 15 days. If the company receives the invoice on or before the end of the accounting year, it will be booked as accounts payable. If the invoice is not received, it will be booked as an accrued liability.


The accrual accounting system gives an accurate and fair picture of the company's financial position: all the revenues of the current period and the expenses incurred to earn those revenues are recorded in the same accounting period, so the statements reflect the actual profit or loss of the company.

The Accrual accounting system is a preferred accounting system by the Financial Accounting Standard Board (FASB).

Financial statements prepared using the accrual accounting system are more comparable across periods and companies than those prepared under the cash accounting system.

It is a complex accounting system and requires competent personnel who can track and report transactions promptly.

It does not provide an accurate picture in terms of sales and cash. The company’s sales may be much higher than its actual cash position.

It does not benefit small businesses where most transactions are done on a cash basis.


Accrued liabilities or accrued expenses are expenses incurred by the business in one period but paid in another period. Recording expenses this way gives an accurate picture of the company's accounts.


How Does Numpy.mean() Work With Example

Introduction to numpy.mean()

numpy.mean() is a function in Python that calculates the arithmetic mean of all the elements present in the array entered by the user. Simply put, the function takes the sum of the individual elements along the provided axis and divides that sum by the number of elements. The axis along which the calculation is made can be specified; otherwise the default (the flattened array) is used.


Syntax and Parameters

The following is the syntax that displays how to implement numpy.mean():

numpy.mean(a, axis=None, dtype=None, out=None, keepdims=<no value>)

For integer input, the intermediate calculations are carried out using the float64 data type, and the function returns the floating-point mean of the corresponding elements.
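A minimal call illustrating the float64 behaviour described above (the four sample values are arbitrary):

```python
import numpy as np

a = np.array([10, 20, 30, 40])   # integer input array
m = np.mean(a)                   # intermediate sums are carried out in float64

print(m)          # 25.0
print(m.dtype)    # float64 -- even though the input was integer
```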

The parameters used in the syntax of numpy.mean() are:

a : array_like

The input array entered by the user. If the input is not already an array, a conversion is attempted on the data entered.

axis : None, int, or tuple of ints (optional parameter)

The axis or axes along which the mean of the elements of the array is computed. By default, the mean of the flattened array is computed. If a tuple of ints is given, the mean is computed over multiple axes of the array.

dtype : data-type (optional parameter)

The type used in computing the mean. By default, the float64 data type is used for arrays of integers. If the input data is floating-point, the result keeps the same dtype as the input.

out : ndarray (optional parameter)

An alternate output array in which to place the result; it must have the same shape as the expected output.

keepdims : bool (optional parameter)

If this parameter is set to True, the axes which are reduced are left in the result as dimensions with size one. This option enables the result to broadcast correctly against the input array. If the default value is passed, keepdims is not passed through to the mean method of sub-classes of ndarray; if a non-default value is passed, keepdims will be passed through, and sub-classes that do not implement it will raise an exception.

m : ndarray

If the parameter out=None, a new array containing the mean values is returned. Otherwise, a reference to the output array is returned.

Examples to Implement numpy.mean()

Below are the examples mentioned:


import numpy as n1

a1 = n1.array([[10, 20, 30], [30, 40, 50], [40, 50, 60]])
print('The new array entered by the user is:')
print(a1)
print('Application of the numpy.mean() function on the array entered:')
print(n1.mean(a1))
print('Application of the mean() function alongside the axis - 0:')
print(n1.mean(a1, axis=0))
print('Application of the mean() function alongside the axis - 1:')
print(n1.mean(a1, axis=1))

The code above prints the original array, the overall mean (approximately 36.67), the means along axis 0 ([26.67, 36.67, 46.67]), and the means along axis 1 ([20., 40., 50.]).

How Does the numpy.mean() Work?

The function scans through the values specified in the array provided by the user. Unless an axis is given, it first flattens the array before computing the arithmetic mean.

We can use the NumPy mean function to compute the mean value:

As the function travels along the axis or axes provided by the user, it computes the arithmetic mean of all integer values; where elements are not of an integer data type, it tries to convert them.

Here you can see that for a single-dimensional array with six specified elements, the function scans each element and then divides the total sum of the elements by the number of elements in the array (here 6).
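That computation can be checked directly (the six values below are hypothetical):

```python
import numpy as np

a = np.array([3, 5, 7, 9, 11, 13])   # hypothetical six-element array
total = a.sum()                       # 48
mean = np.mean(a)                     # 48 / 6

print(total, mean)                    # 48 8.0
```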

In the same way, for arrays with multiple dimensions, the mean is calculated along all axes or the specified axis, and for arrays with more than one dimension the result is itself displayed as an array.


The mean() function in NumPy is very useful for calculating the arithmetic average of elements, especially for data given as array subsets. Computing this through manual code adds verbosity and increases computation time for long programs with large data sets.


How Does Double Taxation Work With Example?

Definition of Double Taxation

Double taxation occurs when the same income is subject to tax twice– either in the hands of two different taxable parties in the same country (called Economic Double Taxation) or in the hands of the same taxable party across two taxable jurisdictions or countries (called Juridical Double Taxation).



Corporations may be obligated to pay taxes on their profits, and a portion of those profits can also be subject to taxation for shareholders as dividend income.

How Does It Work?

Economic Double Taxation: Corporations must pay taxes based on their earnings for the period. Entities often distribute a portion of their profits to their shareholders as dividends. Shareholders must pay personal income tax on the dividend income from their investments in such corporations. Consequently, the company and the individual shareholder end up paying taxes on the same amount, as the company's taxed earnings are distributed as dividends.
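A rough numeric sketch of economic double taxation (the 21% corporate rate and 15% dividend rate are illustrative assumptions, not figures from the article):

```python
profit = 1_000_000          # corporate pre-tax profit
corporate_rate = 0.21       # assumed corporate tax rate
dividend_rate = 0.15        # assumed personal tax rate on dividends

corporate_tax = profit * corporate_rate       # tax paid by the company
after_tax = profit - corporate_tax            # 790,000 available for dividends
shareholder_tax = after_tax * dividend_rate   # tax paid again by shareholders

total_tax = corporate_tax + shareholder_tax
print(total_tax)   # 328,500 -> the same profit is taxed twice
```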

Juridical Double Taxation: This kind of taxation occurs when the same income is taxable both in the country where it is earned and in the taxpayer's country of residence. In the case of a company headquartered in the United States of America with operations in the United Kingdom, for example, the company's arm in the UK earns profits there, which are then repatriated to the company's home country, the USA. The UK arm is required to pay taxes on its earnings in the UK, while the profits repatriated to the USA may also be taxed under US taxation laws, leading to the profits being taxed twice: in the UK and in the USA.

Example of Double Taxation

Different examples are mentioned below:

Economic Double Taxation

Juridical Double Taxation

Suppose US-based Corporation 'A' earns profits through its UK arm, pays $200,000 of tax on them in the UK, and would owe $250,000 of US tax on the same profits. Relief may then take one of the following forms:

Income Exempt: Repatriated profits may be exempt from tax in the US. Accordingly, the corporation would have no tax liability on the profits repatriated to the US.

Tax Credit: Corporation ‘A’ may be allowed a tax credit of $200,000 (the amount of tax paid in the UK on the same profits). As a result, the company’s tax liability on the repatriated profits may reduce to $50,000 as opposed to $250,000 had the income been fully taxed without the benefit of a tax credit.

Concessional Tax Rate: Repatriated profits may be taxed lower than the rate applicable to other corporation earnings, resulting in tax liability lower than $250,000.
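The three relief routes listed above can be compared numerically using the figures from the example (the 50% concessional rate is a made-up illustration):

```python
foreign_tax = 200_000   # tax paid in the UK on the profits (from the example)
home_tax = 250_000      # US tax on the same profits before any relief

exemption = 0                               # income exempt -> no US tax at all
credit = max(home_tax - foreign_tax, 0)     # tax credit -> only the difference is due
concessional = home_tax * 0.5               # hypothetical 50% concessional rate

print(exemption, credit, concessional)      # 0 50000 125000.0
```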

Double Taxation Agreement

To avoid Juridical Double Taxation, various countries have entered into treaties to prevent it, often based on guidance provided by the Organization for Economic Cooperation and Development. Double Taxation Agreements (DTAs) are structures to avoid double taxation on international earnings of taxable entities, facilitate a better exchange of information between countries, promote improved trade relations, and prevent tax evasion.

Some of the common ways in which DTAs may provide relief from it are as follows –

The country of residence exempts income earned in the other country if tax has been paid in the country of origin.

The income is taxed in both countries, with the country of residence providing tax credits to the extent of taxes paid in the country of origin.

Lower or concessional rates of tax are allowed in either of the taxing countries.

DTAs may contain different methods of claiming tax relief for different types of income.

How to Avoid Double Taxation?

Tax laws usually prevent economic double taxation through lower tax rates or tax credits. For example, individual taxpayers benefit from lower tax rates on qualified dividends in the US compared to the rates applicable to their regular income.

Under Juridical Double Taxation, the country of origin may exempt income if it has been taxed in the foreign country, or the country of residence may provide tax credits for international income or apply concessional tax rates, as per the DTA between the countries.


Benefits to the Tax Payers

DTAs help reduce the tax burden that taxpayers would otherwise incur on account of double taxation, by any of the following means (as provided for in the DTA) on their international income:

Foreign-source income is exempt

Foreign-source income is taxed at concessional rates

Tax credits or refunds of tax paid in the other country

Benefits to the nations involved in the DTA

Promotes better trade and investment relations between the nations

Allows for more transparency in the flow of transactions between the nations

Better information sharing between nations can help prevent or detect tax evasion.


Double taxation refers to the taxation of the same income twice, leading to higher-than-normal taxes levied on the same income at a macro level. Tax regulations and DTAs allow for lower tax rates and tax credits to provide relief from double taxation. Thus, DTAs help relieve Juridical Double Taxation and allow for better trade relations and information sharing between countries.


How Does an AI Algorithm Work? 6 Problems Solved With AI Algorithms

Here is how an AI algorithm works.

In fact, as time passes, these kinds of coding instructions have gotten much more comprehensive and complex than anyone could have anticipated.

Problems Solved by Using AI Algorithms

There are so many issues that have been solved utilising AI algorithms.


Using an AI algorithm has the particular benefit of making it easier to sift through large volumes of data in a relatively short time. Medical researchers may sift through enormous quantities of data using specialised software to uncover connections that might lead to cures, the creation of life-saving technology, vaccination integration, and more.  

Energy

Public Safety

Another fascinating use of AI algorithms is in our traffic network. You’ll understand how this sort of programming is used if you’ve ever pondered how a red light adjusts based on traffic flow or how certain big cities may automatically modify traffic based on emergency situations.  

Global Warming

Those anxious about the status of our world and global warming will be relieved to learn that AI algorithms are increasingly being applied to this problem as well.


AI algorithms are becoming increasingly prevalent in communications. This level of technology has handled numerous difficulties, from how we use the internet to how we make a phone call, making this the easiest period in history to communicate with one another.


AI algorithms are also used by governments on a daily basis. Although much about how the US federal government handles personal data is unclear, computer software surveillance of specific aspects and communications has resulted in the prevention of significant terrorist acts both at home and abroad. That’s just a taste of the ever-evolving and ever-expanding ways humans are utilising AI to widen our horizons and make things easier, safer, and more pleasurable for future generations.  

Top 5 AI Algorithms 2023

1. Linear Regression

Consider how you would stack random logs of wood in ascending order of their weight to see how this method works. But there’s a catch: you can’t weigh each log. You must estimate its weight based on the log’s height and girth (optical analysis) and arrange it based on the integration of these visible factors. This is how machine learning works using linear regression.  

2. Logistic Regression

From a set of independent variables, logistic regression is used to calculate discrete values (typically binary values like 0/1). By comparing data to a logit function, it aids in predicting the likelihood of an event. It’s sometimes referred to as logit regression.  
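The logit function mentioned above maps any real-valued score to a probability in (0, 1). A minimal from-scratch sketch (the weights `w`, `b` are a hypothetical fitted model, not real data):

```python
import math

def sigmoid(z):
    """Logistic (inverse-logit) function: maps a real-valued score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# A hypothetical fitted model: p(event) = sigmoid(w * x + b)
w, b = 2.0, -1.0
for x in (0.0, 0.5, 1.0):
    p = sigmoid(w * x + b)
    label = 1 if p >= 0.5 else 0   # threshold the probability to a binary 0/1 value
    print(x, round(p, 3), label)
```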

3. Decision Tree

The Decision Tree method is one of the most widely used machine learning algorithms today; it is a supervised learning technique for categorising issues. It is effective in categorising both category and continuous dependent variables. We divide the population into two or more homogenous sets using this technique based on the most important attributes/independent variables.  

4. SVM Algorithm

The SVM (Support Vector Machine) algorithm is a classification technique in which raw data is plotted as points in an n-dimensional space. The value of each feature is then tied to a particular coordinate, making data classification simple. Lines called classifiers can then be used to split the data.

5. Naive Bayes Algorithm

The existence of one feature in a class is assumed to be independent of the presence of some other feature by a Naive Bayes classifier. Even though these characteristics are linked, a Naive Bayes classifier would examine each of them separately when computing the likelihood of a specific result. A Naive Bayesian model is simple to construct and may be used to analyse large datasets. It’s easy to use and has been shown to outperform even the most complex categorization algorithms.  
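The independence assumption can be shown with a toy from-scratch calculation (all priors and likelihoods below are invented for illustration):

```python
# Toy Naive Bayes: P(class | f1, f2) is proportional to P(class) * P(f1|class) * P(f2|class)
priors = {"spam": 0.4, "ham": 0.6}
likelihood = {
    "spam": {"offer": 0.7, "link": 0.6},
    "ham":  {"offer": 0.1, "link": 0.3},
}

def score(cls, features):
    p = priors[cls]
    for f in features:
        p *= likelihood[cls][f]   # each feature is treated as independent
    return p

features = ["offer", "link"]
scores = {c: score(c, features) for c in priors}
total = sum(scores.values())
posterior = {c: s / total for c, s in scores.items()}   # normalize to probabilities
print(posterior)   # spam dominates: ~0.903 vs ~0.097
```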


Exploratory Data Analysis With An Example

This article was published as a part of the Data Science Blogathon.


Exploratory Data Analysis helps in identifying any outlier data points, understanding the relationships between the various attributes and structure of the data, recognizing the important variables. It helps in framing questions and visualizing the results, paving the way to make an informed choice of the machine learning algorithm for the problem at hand.

While working on performing Exploratory Data Analysis, it is important that we keep our objective in mind. Plotting fancy graphs is not the aim but deriving useful insights is.

Keeping that in mind, in this article we would look into an example of Exploratory Data Analysis performed on Haberman’s survival dataset which is available on Kaggle.

The objective of this analysis is to find patterns within the dataset to gain further understanding of the data and leverage it to choose a machine learning algorithm for predicting the survival rates of patients who undergo the surgery.

The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago’s Billings Hospital on the survival of patients who had undergone surgery for breast cancer.

Data attributes:-

Age of patient at the time of operation (numerical)

Patient’s year of operation (year — 1900, numerical)

Number of positive axillary nodes detected (numerical)

Survival status (class attribute): 1 = the patient survived 5 years or longer, 2 = the patient died within 5 years

We start by loading the data into a data frame

import pandas as pd

df = pd.read_csv("/kaggle/input/habermans-survival-data-set/haberman.csv")
df.shape

The data has 305 rows and 4 columns with no NULL values. The columns do not have a heading/title, hence we provide a meaningful title to the columns in our dataset.

df.columns = ["age",'year','nodes','status']



The data includes patients whose ages are ranging from 30 to 83 years.

The operation has been performed for years 1958 to 1969

The maximum number of lymph nodes in the dataset is 52, with an approximate mean value of 4
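Summary figures like those above are typically obtained with `df.describe()`. A self-contained sketch with a few hypothetical rows standing in for the real dataset:

```python
import pandas as pd

# Hypothetical rows with the same four columns as the Haberman dataset
df = pd.DataFrame({
    "age":    [30, 45, 62, 83],
    "year":   [58, 60, 65, 69],
    "nodes":  [0, 4, 10, 52],
    "status": [1, 1, 2, 2],
})

stats = df.describe()   # count, mean, std, min, quartiles, max per column
print(stats.loc["min", "age"], stats.loc["max", "age"])   # 30.0 83.0
print(round(stats.loc["mean", "nodes"], 1))               # 16.5
```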

Let’s Look at the Data!

A quick look at the count of records for the attributes “age” and “year”(when the operation was performed) gives us the following insights.

We have more patients with a survival status of 1(those who survived for more than 5 years or longer) than patients with a survival status of 2(those who died within 5 years)

As the number of records existing for both survival rates has a major difference, our data is imbalanced.

In the year 1958, most of the operations were performed and the least operations were performed in 1969

print(df["status"].value_counts())
print(df["year"].value_counts())

Univariate Analysis

It is the simplest form of analyzing data, it uses only one variable hence the name, Univariate.

We would use Probability Density Function, Cumulative Distribution Function, Box Plots, and Violin Plots for our analysis

Probability Density Function

The probability density function (PDF) provides the probability of a random variable falling within a particular range of values.

We have plotted below the PDF of the age


Patients between the ages of 30–34 have survived for 5 years or longer.

Patients whose age is more than 75 died within 5 years of the operation.

The number of patients between the age of 40–50 is more for status 2(i.e those who died within 5 years)

The number of patients between the age of 35–40 is more for status 1(i.e who have survived for 5 years or more), patients within this age group have a good chance of survival post the operation.

PDF of Number of Nodes


We see that the data is overlapping but we can note that the survival rate is better in patients who have 0–2 nodes and the survival rate decreases as there is an increase in the number of nodes.

PDF of the year of operation


The data is overlapping but we can see that between 1963 and 1966 we have more survival data and between 1958–1961 we have more data on patients who died within 5 years of the operation.

Cumulative Distribution Function

It describes the probability that a random variable will be found at a value less than or equal to the point at which the CDF is calculated.
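The CDF curves discussed below can be computed by accumulating a normalized histogram (the node counts here are synthetic stand-ins for the dataset column):

```python
import numpy as np

nodes = np.array([0, 0, 1, 2, 2, 3, 5, 8, 13, 21])   # hypothetical node counts

counts, bin_edges = np.histogram(nodes, bins=5, density=True)
pdf = counts / counts.sum()   # normalize so the bin probabilities sum to 1
cdf = np.cumsum(pdf)          # running total of the PDF gives the CDF

print(cdf[-1])                # ~1.0 -- a CDF always ends at 1
```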

CDF of number of nodes


80% of the patients who survived had nodes less than 5.

Whereas 80% of the patients who could not survive had greater than 10 nodes.

CDF of Age


The CDF curve is highly overlapping but we can observe that 20% of surviving patients had an approximate age of 38, while 20% of patients who could not survive have an approximate age of 45

Box Plots

Box plots help us visualize the distribution of data based on quartiles and provide some indication of the data's symmetry and skewness. Unlike many other methods of data display, boxplots show outliers.

Boxplot on Age


The 25th percentile of patients who survived is at approximately 42 years of age.

The 25th percentile of patients who died is at approximately 46 years.

The 50th percentile of patients who survived is at approximately 52 years.

The 50th percentile of patients who died is at approximately 55 years.

The 75th percentile of patients who survived is at approximately 60 years.

The 75th percentile of patients who died is at approximately 61 years.

As we have noted before the data is overlapping to a great extent and hence we would not be able to draw an accurate conclusion on the basis of just the age of the patient.

Boxplot on Nodes


We can see that for the nodes attribute we have some outlier points.

The 75th percentile of patients who survived have fewer than 4 nodes, whereas the 50th percentile of patients who could not survive have 5 nodes.

Having a low count of active nodes is definitely a contributing factor to the survival of a patient.

Bivariate Analysis

The aim is to find patterns/relationships within the dataset using two attributes. It is useful in testing simple associations.

One plot which can be used for the analysis is the pair plot.

Pair plots are an easy way to visualize relationships within your data. A matrix of each variable associated with another variable is produced for our analysis.

 Example of Pair Plots


Plot 2:- attributes:- age and year

The points are overlapping, due to which all points are not clearly visible on the plot, which makes it difficult to conclude.

Plot 3:- attributes:- age and nodes

The points are overlapping, due to which all points are not clearly visible on the plot, which makes it difficult to conclude. We can however see that patients with a higher number of nodes and higher age are generally of status 2 (those who could not survive).

Plot 6:- attributes:- year and nodes

The points are overlapping, due to which all points are not clearly visible on the plot, which makes it difficult to conclude.

Multivariate Analysis

Contour plots can be used for multivariate analysis. They are used to represent a three-dimensional surface on a two-dimensional plane. One variable is represented on the horizontal axis and a second variable is represented on the vertical axis. The third variable is represented by a colour gradient.

A contour plot on attributes, age on the Y axis, year on the X-axis, and the third variable is status = 1(successful survival post 5 years of operation)


The patients who survived are mostly in the approximate age group of 45–55 within the years 1962–1964

A contour plot on attributes, age on the Y axis, year on the X-axis, and the third variable is status = 2(could not survive)


The patients who could not survive were in the approximate age group of 45–50 between the years 1962 and 1965


The dataset is imbalanced.

The data is highly overlapping for both statuses, which makes it difficult to implement a simple decision-making algorithm.

A lower number of nodes is a contributing factor to the survival of a patient.

Through our exploratory data analysis, we can deduce that since our dataset has highly overlapping attributes we would need to use a powerful machine learning algorithm for our objective of predicting the survival rates.

Hope you liked reading my article on Exploratory Data Analysis.


The media shown in this article is not owned by Analytics Vidhya and are used at the Author’s discretion. 


How Does Merger Arbitrage Work With Examples?

What is Merger Arbitrage



How does Merger Arbitrage work?

As has already been discussed above, investors use merger arbitrage to exploit the uncertainties surrounding the successful execution of a merger, especially during the period between the announcement of the acquisition and the formal completion of the same. For instance, let us assume that Company A is the acquirer, and Company B is the target in a merger transaction. On Jan 1, 2023, Company A announced that it would acquire Company B in the next six months at the offered price of $100 per share. On the announcement day, Company B’s share price jumped from a pre-announcement price of $70 to close at $85 per share.

The period between the announcement of the deal and its formal execution is critical for a merger arbitrage, which in this example is six months. This period includes many processes, such as shareholders’ approval for the deal, approval from the regulatory authorities, tracking of the target company’s performance, and a bunch of legal paperwork. The spread between $85 and $100 captures the perceived risk of the deal not going through as per plan. Now, as the day for the deal arrives, and if there is no negative news about the merger, the target company’s share price will continue to inch toward the target price of $100.

Examples of Merger Arbitrage

Some of the significant examples have been discussed below:

In June 2016, Microsoft Corp. announced that it would acquire LinkedIn Corporation as per a definitive agreement. It was an all-cash transaction worth $26.2 billion, under which Microsoft bought each LinkedIn share for $196. On the announcement day (June 13, 2016), LinkedIn stock started trading at $131.08 per share and closed at $192.21. The deal was completed in December 2016. If an investor had bought a LinkedIn share at $192.21 and waited for seven months, he would have made an annualized profit of 3.38% (= ($196 – $192.21) / $192.21 * 12 / 7). It is an example of merger arbitrage.

In October 2018, IBM and Red Hat entered into a definitive agreement under which IBM agreed to purchase all of Red Hat's equity shares at a target price of $190 per share in an all-cash merger. The transaction was one of the most significant tech acquisitions of the year, valued at approximately $34 billion. The pre-announcement price of $116.87 per share soared to $169.93 by the end of the announcement day. The deal was completed in July 2019. If an investor had bought Red Hat's share at $169.93 and waited for eight months, he would have made an annualized profit of 17.7% (= ($190 – $169.93) / $169.93 * 12 / 8). It is another example of merger arbitrage.
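The annualized spread returns quoted in the two deals above can be reproduced directly (simple, non-compounded annualization, as in the article's formulas):

```python
def annualized_spread(offer, entry, months):
    """Simple (non-compounded) annualized return on a merger-arbitrage spread."""
    return (offer - entry) / entry * 12 / months

linkedin = annualized_spread(196.00, 192.21, 7)   # Microsoft / LinkedIn deal
red_hat = annualized_spread(190.00, 169.93, 8)    # IBM / Red Hat deal

print(f"{linkedin:.2%}")   # 3.38%
print(f"{red_hat:.2%}")    # 17.72%
```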

Merger Arbitrage in Investment Strategy

Now, a merger arbitrageur has a strategy for both situations.

High Probability of Successful Closure: A merger arbitrageur will purchase the shares of the target company (trading at a lower price band) while shorting the acquiring company’s shares (trading at a higher price band). Now, after the successful closure of the deal, the target company’s share converts into the acquiring company’s shares. In this case, the investor will use the converted shares to cover its short position and, as a result, will sell the shares at a higher price.

Low Probability of Successful Closure: In this case, the investor will short-sell the target company’s stocks. When the merger fails, the target company’s share price will fall back to the pre-announcement level. The failure of the deal can be due to multiple reasons. However, the arbitrageur can profit by purchasing the company’s stocks at a lower price and covering its short position.


In most cases, merger arbitrage strategies focus on limiting downside risk and making informed decisions. As a result, these strategies are market neutral and can profit in any market situation.

These aggressive strategies can yield a high return in a brief period if appropriately executed.

At times, some investors use these strategies speculatively, which may surge the stock prices to levels that cannot be explained through fundamental analysis.

Larger hedge funds deploy bulk transactions and use these strategies to influence the market.

