

Triangle Patterns – Technical Analysis

Technical analysis tools for recognizing emerging bullish or bearish market patterns

Written by

Tim Vipond

Published February 2, 2023

Updated July 7, 2023

Triangle Patterns

Triangle patterns are a commonly used technical analysis tool. It is important for every trader to recognize patterns as they form in the market. Patterns are vital in a trader’s quest to spot trends and predict future outcomes so that they can trade more successfully and profitably. Triangle patterns are important because they help indicate the continuation of a bullish or bearish market. They can also assist a trader in spotting a market reversal.

There are three types of triangle patterns: ascending, descending, and symmetrical. The picture below depicts all three; as you read the breakdown of each pattern, you can use it as a point of reference for what each pattern looks like. Here is the short version of triangle patterns:

Ascending triangles are a bullish formation that anticipates an upside breakout.

Descending triangles are a bearish formation that anticipates a downside breakout.

Symmetrical triangles, where price action grows increasingly narrow, may be followed by a breakout to either side—up or down.

Ascending Triangle Patterns 

Ascending triangle patterns are bullish, meaning that they indicate that a security’s price is likely to climb higher as the pattern completes itself. This pattern is created with two trendlines. The first trendline is flat along the top of the triangle and acts as a resistance point which—after price successfully breaks above it—signals the resumption or beginning of an uptrend. The second trendline—the bottom line of the triangle that shows price support—is a line of ascension formed by a series of higher lows. It is this configuration formed by higher lows that forms the triangle and gives it a bullish characterization. The basic interpretation is that the pattern reveals that each time sellers attempt to push prices lower, they are increasingly less successful.

Eventually, price breaks through the upside resistance and continues in an uptrend. In many cases, the price is already in an overall uptrend and the ascending triangle pattern is viewed as a consolidation and continuation pattern. In the event that an ascending triangle pattern forms during an overall downtrend in the market, it is typically seen as a possible indication of an impending market reversal to the upside.

Indications and Using the Ascending Triangle Pattern 

Because the ascending triangle is a bullish pattern, it’s important to pay close attention to the supporting ascension line because it indicates that bears are gradually exiting the market. Bulls (or buyers) are then capable of pushing security prices past the resistance level indicated by the flat top line of the triangle.

As a trader, it’s wise to be cautious about making trade entries before prices break above the resistance line because the pattern may fail to fully form or be violated by a move to the downside. There is less risk involved by waiting for the confirming breakout. Buyers can then reasonably place stop-loss orders below the low of the triangle pattern.
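To make the identification steps above concrete, here is a minimal, illustrative sketch in Python (pandas assumed) of how one might screen recent price data for an ascending-triangle setup. The window size, tolerance, column names, and function name are all assumptions for illustration, not a trading system.

import pandas as pd


def ascending_triangle_signal(df: pd.DataFrame, window: int = 30,
                              flat_tol: float = 0.01) -> dict:
    # Illustrative check only: assumes `df` has 'high', 'low', 'close' columns.
    recent = df.tail(window)

    # Flat top: recent swing highs cluster near a single resistance level.
    resistance = recent["high"].max()
    touches = (recent["high"] > resistance * (1 - flat_tol)).sum()

    # Rising bottom: split the window in half and compare the lows.
    first_half_low = recent["low"].iloc[: window // 2].min()
    second_half_low = recent["low"].iloc[window // 2:].min()
    higher_lows = second_half_low > first_half_low

    # Only act on a confirmed breakout above resistance, as the text advises.
    breakout = recent["close"].iloc[-1] > resistance

    return {
        "resistance": resistance,
        "higher_lows": bool(higher_lows),
        "touches_of_resistance": int(touches),
        "breakout_confirmed": bool(breakout),
        # Reference level for a stop-loss just below the low of the pattern.
        "pattern_low": recent["low"].min(),
    }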

Using Descending Triangle Patterns 

Based on its name, it should come as no surprise that a descending triangle pattern is the exact opposite of the pattern we’ve just discussed. This triangle pattern offers traders a bearish signal, indicating that the price is likely to continue lower as the pattern completes itself. Again, two trendlines form the pattern, but in this case, the supporting bottom line is flat, while the top resistance line slopes downward.

Just as an ascending triangle is often a continuation pattern that forms in an overall uptrend, likewise a descending triangle is a common continuation pattern that forms in a downtrend. If it appears during a long-term uptrend, it is usually taken as a signal of a possible market reversal and trend change. This pattern develops when a security’s price falls but then bounces off the supporting line and rises. However, each attempt to push prices higher is less successful than the one before, and eventually, sellers take control of the market and push prices below the supporting bottom line of the triangle. This action confirms the descending triangle pattern’s indication that prices are headed lower. Traders can sell short at the time of the downside breakout, with a stop-loss order placed a bit above the highest price reached during the formation of the triangle.

Using Symmetrical Triangle Patterns 

Traders and market analysts commonly view symmetrical triangles as consolidation patterns which may forecast either the continuation of the existing trend or a trend reversal. This triangle pattern is formed as gradually ascending support lines and descending resistance lines meet up as a security’s trading range becomes increasingly smaller. Typically, a security’s price will bounce back and forth between the two trendlines, moving toward the apex of the triangle, eventually breaking out in one direction or the other and forming a sustained trend.

If a symmetrical triangle follows a bullish trend, watch carefully for a breakout below the ascending support line, which would indicate a market reversal to a downtrend. Conversely, a symmetrical triangle following a sustained bearish trend should be monitored for an upside breakout indication of a bullish market reversal.

Regardless of whether a symmetrical triangle breakout goes in the direction of continuing the existing trend or in the direction of a trend reversal, the momentum that is generated when price breaks out of the triangle is usually sufficient to propel the market price a significant distance. Thus, the breakout from a symmetrical triangle is usually considered a strong signal of future trend direction which traders can follow with some confidence. Again, the triangle formation offers easy identification of reasonable stop-loss order levels—below the low of the triangle when buying, or above the triangle high if selling short.

The Bottom Line 

In the end, as with any technical indicator, successfully using triangle patterns really comes down to patience and due diligence. While these three triangle patterns tend toward certain signals and indications, it’s important to stay vigilant and remember that the market is not known for being predictable and can change directions quickly. This is why judicious traders eyeing what looks like a triangle pattern shaping up will wait for the breakout confirmation by price action before adopting a new position in the market.


Hyper Patterns – Studio Arian Hakimi

The studio aims to redefine our basic understanding of patterns, re-discovering possibilities and expanding the rational capacity of complex geometries for the next generation of hyper-building clusters in a multi-scenario multi-objective environment.

Hyper Patterns studio

The Hyper Patterns design studio offers a fusion of academic knowledge and professional experience. Each applicant’s contribution to the final project will vary to reflect their individual background and interests. Due to the intense nature of the tasks and deliverables, participants will work in groups. Each design phase will culminate in the meticulous production of a series of digital models documented in matrix-based catalogs. For the final project, the design studies will be materialized in tower typology research.

Our future is set to be urban. As we move forward in the 21st century, the global population will likely continue growing. Today, more than half of the population [4.4 billion inhabitants] live in urban areas, and 1.5 million people are added to the global urban population every week; the UN Department of Economic and Social Affairs projects that 70% of the world population will live in urban areas by 2050. The next golden age of skyscrapers is upon us: between 1924 and 1934, 49 buildings over 150 m in height were completed, all in the US. Between 2006 and 2024, a total of 2,296 will have been completed, with new towers on every continent.

Design Prologue

A pattern, by definition, is a series or sequence that repeats. Patterns appear as visible regularities in nature in the form of geometric structures such as symmetry, fractals, spirals, bubbles, and tessellations, among others.

From galaxy formation to atomic orbitals, almost every phenomenon in our cosmos is governed by rules that can be described as patterns in physical form. For millennia, humankind has attempted to explain the order of the universe, using mathematics to discover and describe abstract behaviors in nature. This year, the Hyper Patterns program aims to re-define our basic understanding of patterns, re-discovering possibilities and expanding the rational capacity of complex geometries for the next generation of hyper-building clusters in a multi-scenario, multi-objective environment.

Design methodology

The form-finding mechanism is based on a rigorous and morphological process (to describe a scientific approach) constituted by 4 interconnected phases of exploration:

Component Study[CS]

Each group will select from a range of pre-rationalized components corresponding to the final project scenario and categorize them by the inherent spatial characteristics of each component. Through sets of experiments manipulating and varying each of the defining geometrical parameters, students become familiar with the geometrical logic and document their findings in a matrix-based catalog.

Objectives:

Humanizing architecture by reintroducing the human body as a measure of dimension and proportion

Define a set of individual units that could be expressed within larger interconnected parts

The geometric relationship between dimensions

Growth System[GS]

Recognizing and understanding the relationship between components within a controlled aggregation system, investigating the most compact and efficient assembly of components that can lead to a conceptual framework [formal-organizational possibilities Subsystem]

Objectives

Develop a problem-solving pattern cluster language (formal-organizational possibilities) (Subsystem correlation)

Experiments in optimizing the use of space in 2d and 3d

Program

Each group will set out their experimental design strategy based on their self-defined program brief corresponding to the selected scenario. This process also includes the narrative, a source of inspiration that can justify the objective of the final project.

Design Development/Documentation

Consolidating the design repertoire into a formal strategy and methodology with respect to the site and brief conditions, without losing its structural unity, secures an intrinsic three-dimensional expression in spatial form or building mass that reflects the spatial design organization.

Software:

Rhino3D

Grasshopper3d

Adobe Illustrator

Adobe InDesign

Important Notes:

The Hyper Patterns Studio workshop by PAACADEMY will start on Friday, January 27, at 09:00 (London Time).

Total sessions: 7 (teaching sessions) + 2 (projects review) + 1 (final presentation)

Schedule on each Friday: 09:00 – 18:00 (London Time).

The teaching duration per week will be around 5-7 hours only.

Students will have time for a break between the teaching hours and be given time to work on their projects during the session.

Each session and the entire studio will be recorded, and the videos will be available to participants a day after each class, with unlimited access.

Certificate of attendance will be provided by PAACADEMY only for students who deliver a final project.

No previous knowledge of any software is required. You will learn everything in the workshop.

The studio has limited seats. Tickets are non-transferable & non-refundable. Please read carefully before you register.

Instructors: Arian Hakimi

Arian Hakimi is the founder of Arian Hakimi Architects, a multidisciplinary studio for Architecture, Urbanism, and design, rooted in systematic design strategies. His academic background starts in India, where he attended the University of Pune and later transferred to the University of Tehran to finish his studies, focusing on the lost arts of Iran. He then pursued graduate studies at IaaC-UPC, focusing on Urban Design & computational-morphogenesis design methodology.

He has been lecturing in the field of Parametricism and systematic design methods and has tutored over 17 international workshops promoting bottom-up design methodology to unfold new territories in the design field, including the summer internship program at Zaha Hadid Architects, which resulted in an exhibition of the design process at the late Zaha Hadid’s final show at Maison Mais Non, London.

Hoda Eskandarnia

Hoda Eskandarnia is an architect and an interdisciplinary designer working at the intersection of science and technology. She is the founder and director of Nia Design Studio, an innovative architecture and design practice based in Iran that works at the nexus of design and emergent technologies. Hoda completed her master’s degree with distinction in MArch Architectural Design at the Bartlett School of Architecture, UCL, where she received a B-Pro Gold Prize for design excellence. Her work has been exhibited worldwide in venues such as Biofabricate (New York), the TAB bioTallinn Architecture Biennale (Estonia), and The Royal Society (London). Hoda has presented her design research in a paper published by Springer at the Human-Computer Interaction (HCI) International Conference (Denmark). She also received first prize in the poster competition at the 3rd IEEE UK & Ireland Robotics Automation Society Conference (London).

Nariman Nejati

Nariman Nejati is an architect and multidisciplinary designer. He has been involved in various projects from competition to construction at [DDA/DYBAN 2013-2024]. He is currently a general manager at Arian Hakimi Architects [AHA] [AHD], working on various design disciplines, from fashion design to architectural buildings.

Repeating Patterns In Photoshop – Adding Colors And Gradients


Written by Steve Patterson.

In the previous tutorial, we learned the basics of creating and using simple repeating patterns in Photoshop. We designed a single tile using the Elliptical Marquee Tool and the Offset filter. We then saved the tile as a pattern. Finally, we selected the pattern and used it to fill a layer, with the pattern seamlessly repeating as many times as needed to cover the entire area. This tutorial continues from where we left off, so you may want to complete the previous section where we created and added our “Circles” pattern if you haven’t done so already.

The main problem with the repeating pattern we’ve created so far is that it’s not very interesting, and a big reason is that it’s nothing more than a black pattern in front of a white background. In this tutorial, we’ll learn how to spice things up a bit by adding colors and gradients! As before, I’ll be using Photoshop CS5 here, but any recent version of Photoshop will work.

Here’s our design as it appears so far:

Adding Solid Colors

Click the New Fill or Adjustment Layer icon at the bottom of the Layers panel, then select Solid Color from the top of the list of fill and adjustment layers that appears:

As soon as you choose Solid Color from the list, Photoshop will pop open the Color Picker so we can choose the color we want to use. This is the color that will become the new background color for the design. I’m going to choose a medium blue. Of course, you can choose any color you like, but if you want to use the same colors I’m using, look for the R, G and B options (which stand for Red, Green and Blue) near the bottom center of the Color Picker and enter 98 for the R value, 175 for G, and 200 for B:

If we look in the Layers panel, we can see what’s happened. Photoshop has added a solid color fill layer, which it named Color Fill 1, between the white-filled Background layer and the black circle pattern on Layer 1. The reason we selected the Background layer before adding the fill layer was because Photoshop adds new layers directly above the layer that’s currently selected and we needed the fill layer to appear above the Background layer but below the circle pattern. The circles remain black in our document because they’re on a layer above the fill layer, which means they’re not being affected by it:

Wait a minute, what happened? Where did our circles go? Where’s the background color we just added? Why is everything now light blue? If we look in the Layers panel, we see the problem, and the problem is that Photoshop did exactly what we asked it to do. It added a solid color fill layer named Color Fill 2, filled with the light blue color we chose in the Color Picker, above the circles pattern on Layer 1:

Unfortunately, since the fill layer is sitting above all the other layers in the Layers panel, it’s blocking everything else from view in the document, which is why all we see is light blue. We need a way to tell Photoshop that we want our new fill layer to affect only the circles pattern on Layer 1 below it, and we can do that using what’s called a clipping mask.

To create one, go to Layer > Create Clipping Mask, or hold Alt (Win) / Option (Mac) and click the dividing line between the two layers in the Layers panel. The Color Fill 2 layer will then appear indented to the right in the Layers panel, telling us that it’s now “clipped” to the contents of the layer below it, meaning that it’s now affecting only the circle pattern on Layer 1:

And in the document window, we see the results we were expecting when we added the fill layer. The black circles now appear light blue against the darker blue background:

Changing Colors

Double-clicking the fill layer’s color swatch in the Layers panel re-opens the Color Picker, allowing us to choose a different color. I’ll choose a cherry color this time by entering 204 for my R value, 32 for G, and 130 for B:

This again re-opens the Color Picker so we can choose a new color. I’ll choose a lighter pink by entering 218 for my R value, 144 for G and 161 for B:

Adding Gradients To Repeating Patterns

With the fill layers gone, the pattern reverts to its original black and white:

Choose a Gradient fill layer from the list that appears:

In the document window, we can see what the Spectrum gradient will look like. Notice that only the circles themselves are being affected by the gradient thanks to that Use Previous Layer to Create Clipping Mask option we selected a moment ago in the New Layer dialog box:

Once you’ve chosen a gradient set, Photoshop will ask if you want to replace the current gradients with the new set or if you just want to append them, which will keep the current gradients and add the new ones to them. Choose Append:

The circle pattern is now colorized with the softer colors of the new gradient:

Changing The Gradient

I’ll add these new gradients in with the others by selecting Append when Photoshop asks me, and the new gradient thumbnails appear in the Presets area of the Gradient Editor. I’ll select the Green, Purple, Blue gradient this time:

Of course, we don’t have to stick with a white background. Here, I’ve used the steps we covered in the first part of the tutorial to add a Solid Color fill layer above the Background layer. I chose a medium purple from the Color Picker as the new color for my background (R:85, G:80, B:129):

And here, we see the combined efforts of the Gradient fill layer on the circle pattern and the Solid Color fill layer on the background:

Where to go next…

And there we have it! That’s how easy it is to colorize repeating patterns with colors and gradients! Up next, we’ll look at how to create fun and interesting repeating patterns with Photoshop’s custom shapes! Or visit our Photoshop Basics section to learn more about Photoshop!

Data Analytics Vs Data Analysis

Differences Between Data Analytics vs Data Analysis

Data analysis involves investigating, cleaning, transforming, and modeling data to find helpful information, suggest conclusions, and support decision-making. Data analysis tools include OpenRefine, Tableau Public, KNIME, Google Fusion Tables, NodeXL, and many more. Analytics utilizes data, machine learning, statistical analysis, and computer-based models to gain better insight and make better decisions from the data. Analytics is “transforming data into actions through analysis and insight in the context of organizational decision-making and problem-solving.” Analytics is supported by many tools such as Microsoft Excel, SAS, R, Python (and its libraries), Tableau Public, and Apache Spark.


Key Differences Between Data Analytics and Data Analysis

Below are the points that describe the key differences between Data Analytics and Data Analysis:

Data analytics serves as a conventional and versatile approach utilized in various sectors including healthcare, business, telecommunications, and insurance, to extract insights from data and inform decision-making processes.

Data analytics consists of data collection and, in general, inspecting the data to see whether it has one or more uses. In contrast, data analysis consists of defining, investigating, and cleaning the data by removing NA values or any outliers present, and transforming the data to produce a meaningful outcome.

To perform data analytics, one has to learn many tools to accomplish necessary actions on data. One must know R, Python, SAS, Tableau Public, Apache Spark, Excel, and many more to achieve analytics. For data analysis, one must have hands-on tools like Open Refine, KNIME, Rapid Miner, Google Fusion Tables, Tableau Public, Node XL, Wolfram Alpha tools, etc.

The data analytics life cycle consists of Business Case Evaluation, Data Identification, Data Acquisition & Filtering, Data Extraction, Data Validation & Cleansing, Data Aggregation & Representation, Data Analysis, Data Visualization, and Utilization of Analysis Results. As we know, data analysis is a sub-component of data analytics, so the data analysis life cycle is also part of the analytics process; it consists of data gathering, data scrubbing, data analysis, and interpreting the data precisely so that you can understand what it is saying.

Whenever someone wants to find out what will happen next, we go with data analytics, because data analytics helps predict future values. Data analysis is performed on past data to understand what has happened so far. Both data analytics and data analysis are necessary to understand the data: one helps estimate future demand, and the other is important for examining the past.

Data Analytics vs Data Analysis Comparison Table

Form: Data analytics is the general form of analytics that businesses use to make data-driven decisions. Data analysis is a specialized form of data analytics used in businesses to examine data and draw insights from it.

Structure: Data analytics consists of data collection and inspection in general, and the data may have one or more uses. Data analysis consists of defining, investigating, cleaning, and transforming the data to produce a meaningful outcome.

Tools: Many analytics tools are on the market; mainly R, Tableau Public, Python, SAS, Apache Spark, and Excel are used. For data analysis, professionals use tools such as OpenRefine, KNIME, RapidMiner, Google Fusion Tables, Tableau Public, NodeXL, and WolframAlpha.

Sequence: The data analytics life cycle consists of Business Case Evaluation, Data Identification, Data Acquisition & Filtering, Data Extraction, Data Validation & Cleansing, Data Aggregation & Representation, Data Analysis, Data Visualization, and Utilization of Analysis Results. The sequence followed in data analysis is data gathering, data scrubbing, analysis of the data, and precise interpretation so that you can understand what the data is saying.

Usage: In general, data analytics enables organizations to find hidden patterns, identify unknown correlations, understand customer preferences, analyze market trends, and extract other necessary information that supports more informed business decisions. Data analysis can be used in various ways: one can perform descriptive, exploratory, inferential, and predictive analyses and draw useful insights from the data.

Example: Say you have 1 GB of customer purchase data for the past year and need to find what your customers’ next possible purchases are; you would use data analytics for that. If instead you have the same data and are trying to find out what has happened so far, that is data analysis: looking into the past.
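As a rough illustration of the distinction, here is a small, hypothetical pandas sketch (column names and numbers are made up): the descriptive summary stands in for data analysis, while the naive forward-looking estimate stands in for data analytics.

import pandas as pd

purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "month": ["2023-01", "2023-02", "2023-01", "2023-03", "2023-02"],
    "amount": [120.0, 80.0, 200.0, 150.0, 60.0],
})

# Data analysis: summarize what has already happened.
monthly = purchases.groupby("month")["amount"].sum().sort_index()
print("Revenue by month (analysis):")
print(monthly)

# Data analytics: a naive forward-looking estimate, here a rolling mean
# used as a next-month forecast. Real analytics would use proper models.
forecast = monthly.rolling(window=2, min_periods=1).mean().iloc[-1]
print("Naive next-month revenue estimate (analytics):", forecast)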

Conclusion

Today, organizations are experiencing a rapid increase in data usage, with a vast amount of data being collected across various sources such as customers, business processes, application users, website visitors, and stakeholders. This data is processed and analyzed to uncover patterns, gain insights, and make informed decisions. Data analytics encompasses a range of tools and techniques, both qualitative and quantitative, that leverage this collected data to generate valuable outcomes.


Customer Segmentation Using RFM Analysis

This article was published as a part of the Data Science Blogathon

Before starting, let’s see what is RFM and why is it important.

Introduction: What is RFM?

RFM is a method used to analyze customer value. RFM stands for Recency, Frequency, and Monetary.

Recency: How recently did the customer visit our website, or how recently did the customer make a purchase?

Frequency: How often do they visit, or how often do they purchase?

Monetary: How much revenue do we get from their visits, or how much do they spend when they purchase?

For example, if we look at the last 12 months of sales data, the RFM values for each customer might look something like the sketch below.
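For instance, a minimal pandas sketch of how such an RFM table can be derived from a toy transaction log might look like this (the customer keys, dates, and amounts below are purely hypothetical):

import pandas as pd

orders = pd.DataFrame({
    "CustomerKey": [11000, 11000, 11001, 11002, 11002, 11002],
    "OrderDate": pd.to_datetime(
        ["2013-01-18", "2013-05-03", "2013-02-15",
         "2013-06-01", "2013-09-20", "2013-12-05"]),
    "Amount": [2319.99, 2384.07, 2443.35, 28.99, 4.99, 539.99],
})

reference_date = pd.Timestamp("2013-12-31")

# One row per customer: days since last order, number of orders, total spend.
rfm = orders.groupby("CustomerKey").agg(
    Recency=("OrderDate", lambda d: (reference_date - d.max()).days),
    Frequency=("OrderDate", "count"),
    Monetary=("Amount", "sum"),
)
print(rfm)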

Why is it needed?

RFM analysis is a marketing framework used to understand and analyze customer behaviour based on the three factors above: Recency, Frequency, and Monetary.

RFM analysis helps businesses segment their customer base into homogeneous groups so that they can engage each group with different targeted marketing strategies.

RFM on Adventure works database:

Now, let’s start the real part. For this, I chose the Adventure Works database, which is publicly available.

Adventure Works Cycles is a multinational manufacturing company. The company manufactures and sells metal and composite bicycles to North American, European, and Asian commercial markets.

The database contains many details. But, I am just concentrating on the Sales details to get RFM values and segment the customers based on RFM values.

5NF Star Schema:

We have to identify the dimension tables and fact tables from the database based on our requirements.

I have prepared a 5NF star schema (Fact, Customer, Product, Date, Location) from the imported database.

Join the tables :

From the above tables, we can write an SQL query to Join all the tables and get the necessary data.

SELECT
    pc.[EnglishProductCategoryName],
    Coalesce(p.[ModelName], p.[EnglishProductName]),
    CASE
        WHEN Month(GetDate()) < Month(c.[BirthDate])
            THEN DateDiff(yy, c.[BirthDate], GetDate()) - 1
        WHEN Month(GetDate()) = Month(c.[BirthDate]) AND Day(GetDate()) < Day(c.[BirthDate])
            THEN DateDiff(yy, c.[BirthDate], GetDate()) - 1
        ELSE DateDiff(yy, c.[BirthDate], GetDate())
    END,
    CASE WHEN c.[YearlyIncome] < 40000 THEN 'Low' ELSE 'Moderate' END,
    d.[CalendarYear],
    f.[OrderDate],
    f.[SalesOrderNumber],
    f.SalesOrderLineNumber,
    f.OrderQuantity,
    f.ExtendedAmount
FROM [dbo].[FactInternetSales] f,
     [dbo].[DimDate] d,
     [dbo].[DimProduct] p,
     [dbo].[DimProductSubcategory] psc,
     [dbo].[DimProductCategory] pc,
     [dbo].[DimCustomer] c,
     [dbo].[DimGeography] g,
     [dbo].[DimSalesTerritory] s
WHERE f.[OrderDateKey] = d.[DateKey]
  AND f.[ProductKey] = p.[ProductKey]
  AND p.[ProductSubcategoryKey] = psc.[ProductSubcategoryKey]
  AND psc.[ProductCategoryKey] = pc.[ProductCategoryKey]
  AND f.[CustomerKey] = c.[CustomerKey]
  AND c.[GeographyKey] = g.[GeographyKey]
  AND g.[SalesTerritoryKey] = s.[SalesTerritoryKey]
ORDER BY c.CustomerKey

Pull the table into an Excel sheet or CSV file. Now you have the data to do RFM analysis in Python, for example as sketched below.
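If you prefer to skip the manual export, one possible approach is to read the query results straight into pandas and write them out; the connection string, driver, and file names below are assumptions, not part of the original workflow.

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical SQL Server connection details; adjust to your environment.
engine = create_engine(
    "mssql+pyodbc://user:password@localhost/AdventureWorksDW"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

# The SELECT statement shown above, saved to a (hypothetical) file.
query = open("internet_sales_rfm.sql").read()
aw_df = pd.read_sql(query, engine)

aw_df.to_csv("Adventure_Works_DB_2013.csv", index=False)
aw_df.to_excel("Adventure_Works_DB_2013.xlsx", index=False)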

That’s all about SQL. 🙂

Calculating R, F, and M values in Python:

From the sales data we have, we calculate the RFM values in Python, analyze customer behaviour, and segment the customers based on those values.

I will be doing the analysis in the Jupyter notebook.

Read the data

import pandas as pd

aw_df = pd.read_excel('Adventure_Works_DB_2013.xlsx')
aw_df.head()

It should look something like below.

CustomerKey EnglishProductCategoryName Model Country Region Age IncomeGroup CalendarYear OrderDate OrderNumber LineNumber Quantity Amount

11000 Bikes Mountain-200 Australia Pacific 49 High 2013 18-01-2013 SO51522 1 1 2319.99

11000 Accessories Fender Set – Mountain Australia Pacific 49 High 2013 18-01-2013 SO51522 2 1 21.98

11000 Bikes Touring-1000 Australia Pacific 49 High 2013 03-05-2013 SO57418 1 1 2384.07

11000 Accessories Touring Tire Australia Pacific 49 High 2013 03-05-2013 SO57418 2 1 28.99

11000 Accessories Touring Tire Tube Australia Pacific 49 High 2013 03-05-2013 SO57418 3 1 4.99

Check for Null Values or missing values:

aw_df.isnull().sum()
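If any of these counts come back non-zero, one simple option (an assumption on my part, not a step from the original workflow) is to drop the incomplete rows before computing the RFM values.

# Drop rows missing the fields needed for RFM (assumed handling).
aw_df = aw_df.dropna(subset=['CustomerKey', 'OrderDate', 'Amount'])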

Exploratory Data Analysis:

Once you are good with the data, we can start doing Exploratory Data Analysis, aka EDA.

Now, let’s check how much in sales each product category generated and how many units of each category were sold.

we will check them using barplot.

import matplotlib.pyplot as plt

# Create the side-by-side axes used below.
fig, axarr = plt.subplots(1, 2, figsize=(15, 6))

product_df = aw_df[['EnglishProductCategoryName', 'Amount']]
product_df1 = aw_df[['EnglishProductCategoryName', 'Quantity']]

product_df.groupby("EnglishProductCategoryName").sum().plot(kind="bar", ax=axarr[0])
product_df1.groupby("EnglishProductCategoryName").sum().plot(kind="bar", ax=axarr[1])

We can see that Bikes account for a huge share of the revenue even though Accessories are sold in higher quantities. This is likely because Bikes cost much more than Accessories.

Similarly, we can check which region has a higher customer base.

import seaborn as sns

# The article works with `aw_df1` from here on; assuming it is simply the
# loaded sales DataFrame.
aw_df1 = aw_df

fig, axarr = plt.subplots(1, 2, figsize=(15, 6))

Customer_Country = (aw_df1.groupby('Country')['CustomerKey'].nunique()
                    .sort_values(ascending=False).reset_index().head(11))
sns.barplot(data=Customer_Country, x='Country', y='CustomerKey',
            palette='Blues', ax=axarr[0])

Customer_Region = (aw_df1.groupby('Region')['CustomerKey'].nunique()
                   .sort_values(ascending=False).reset_index().head(11))
sns.barplot(data=Customer_Region, x='Region', y='CustomerKey',
            palette='Blues', ax=axarr[1])

Calculate R, F, and M values:

Recency

The reference date we have is 2013-12-31.

# The reference date stated above, used to compute Recency below.
reference_date = pd.Timestamp('2013-12-31')

df_recency = aw_df1.groupby(by='CustomerKey', as_index=False)['OrderDate'].max()
df_recency.columns = ['CustomerKey', 'max_date']

The difference between the reference date and each customer’s most recent order date (their latest visit) is the Recency.

df_recency['Recency'] = df_recency['max_date'].apply(lambda row: (reference_date - row).days)
df_recency.drop('max_date', inplace=True, axis=1)
df_recency[['CustomerKey', 'Recency']].head()

We get the Recency values now.

CustomerKey Recency

0 11000 212

1 11001 319

2 11002 281

3 11003 205

Recency plot

plt.figure(figsize=(8, 5))
sns.distplot(df_recency.Recency, bins=8, kde=False, rug=True)

We can see that most customers have ordered within the last two months, while some customers haven’t ordered in more than a year. This way we can identify customers and target them differently, but it is too early to draw conclusions from the Recency value alone.

Frequency:

We can get the Frequency of each customer by counting their orders.

# Frequency here is the number of order lines per customer; counting distinct
# orders instead would use ['OrderNumber'].nunique().
df_frequency = aw_df1.groupby(by='CustomerKey', as_index=False)['OrderNumber'].count()
df_frequency.columns = ['CustomerKey', 'Frequency']
df_frequency.head()

They should look something like below

CustomerKey Frequency

11000 5

11001 6

11002 2

11003 4

11004 3

Frequency plot

plt.figure(figsize=(8, 5))
sns.distplot(df_frequency.Frequency, bins=8, kde=False, rug=True)

We can see that customers who order twice are the most common, followed by those who order three times, while very few customers order more than five times.

Now, it’s time for our last value which is Monetary.

Monetary can be calculated as the sum of the Amount of all orders by each customer.

df_monetary = aw_df1.groupby(by='CustomerKey', as_index=False)['Amount'].sum()
df_monetary.columns = ['CustomerKey', 'Monetary']
df_monetary.head()

Customer Key Monetary

0 11000 4849

1 11001 2419.93

2 11002 2419.06

3 11003 4739.3

4 11004 4796.02

Monetary Plot

plt.figure(figsize=(8, 5))
sns.distplot(df_monetary.Monetary, kde=False, rug=True)

We can clearly see that most customers’ spend is less than $200. This might be because they are buying more accessories, which is common, since we buy bikes once or twice a year but buy accessories more often.

We cannot come to any conclusion by taking only the Recency, Frequency, or Monetary values independently; we have to consider all three factors.

Let’s merge the Recency, Frequency, and Monetary values and create a new dataframe

# First combine Recency and Frequency (this intermediate merge was implied
# above), then add Monetary. The article's later tables and scoring code call
# the frequency column 'LineNumber', so rename it here for consistency.
r_f = df_recency.merge(df_frequency.rename(columns={'Frequency': 'LineNumber'}),
                       on='CustomerKey')
r_f_m = r_f.merge(df_monetary, on='CustomerKey')
r_f_m.head()

CustomerKey Recency LineNumber Monetary

0 11000 212 5 4849

1 11001 319 6 2419.93

2 11002 281 2 2419.06

3 11003 205 4 4739.3

4 11004 214 3 4796.02

Scatter Plot:

When we have more than two variables, we choose a scatter plot to analyze.

Recency Vs frequency

plt.scatter(r_f_m.groupby('CustomerKey')['Recency'].sum(),
            aw_df1.groupby('CustomerKey')['Quantity'].sum(),
            color='red', marker='*', alpha=0.3)
plt.title('Scatter Plot for Recency and Frequency')
plt.xlabel('Recency')
plt.ylabel('Frequency')

We can see that customers whose Recency is less than a month have a high Frequency, i.e., customers buy more when their recency is low.

Frequency Vs Monetary

market_data = aw_df.groupby('CustomerKey')[['Quantity', 'Amount']].sum()
plt.scatter(market_data['Amount'], market_data['Quantity'],
            color='red', marker='*', alpha=0.3)
plt.title('Scatter Plot for Monetary and Frequency')
plt.xlabel('Monetary')
plt.ylabel('Frequency')

We can see that customers who buy frequently spend smaller amounts. This might be because accessories, which are less costly, are bought frequently.

Recency Vs Frequency Vs Monetary

Monetary = aw_df1.groupby('CustomerKey')['Amount'].sum()
plt.scatter(r_f_m.groupby('CustomerKey')['Recency'].sum(),
            aw_df1.groupby('CustomerKey')['Quantity'].sum(),
            marker='*', alpha=0.3, c=Monetary)
plt.title('Scatter Plot for Recency and Frequency')
plt.xlabel('Recency')
plt.ylabel('Frequency')

Now, in the above plot, the color specifies Monetary. From the above plot, we can say the customers whose Recency is less have high Frequency but less Monetary.

This might vary from case to case and company to company. That is why we need to take all the 3 factors into consideration to identify customer behavior.

How do we Segment:

We can bucket the customers based on the above three factors (RFM). For example, put all customers whose Recency is less than 60 days in one bucket, and customers whose Recency is between 60 and 120 days in another. We apply the same concept to Frequency and Monetary as well.

Depending on the company’s objectives, customers can be segmented in several ways, so that marketing campaigns remain financially feasible.

The ideal customers for e-commerce companies are generally the most recent ones relative to the date of study (our reference date), who purchase frequently and spend enough.

Based on the RFM values, I have assigned each customer a score between 1 and 3 for each factor (bucketing them). 3 is the best score and 1 is the worst.

For example, a customer who bought most recently and most often, and spent the most, will have an RFM score of 3-3-3.

To achieve this, we can write a simple code in python as below

Bucketing Recency:

def R_Score(x):
    # Thresholds reconstructed from the truncated listing and the 60/120-day
    # buckets described above: more recent customers score higher.
    if x['Recency'] <= 60:
        recency = 3
    elif x['Recency'] > 60 and x['Recency'] <= 120:
        recency = 2
    else:
        recency = 1
    return recency

r_f_m['R'] = r_f_m.apply(R_Score, axis=1)

Bucketing Frequency

def F_Score(x):
    # Thresholds reconstructed from the truncated listing so that they match
    # the scored tables shown below (three or fewer order lines score 3);
    # invert the branches if a higher score should reward more frequent buyers.
    if x['LineNumber'] <= 3:
        frequency = 3
    elif x['LineNumber'] > 3 and x['LineNumber'] <= 6:
        frequency = 2
    else:
        frequency = 1
    return frequency

r_f_m['F'] = r_f_m.apply(F_Score, axis=1)

Bucketing Monetary

M_Score = pd.qcut(r_f_m['Monetary'],q=3,labels=range(1,4))

r_f_m = r_f_m.assign(M = M_Score.values)

Once we bucket all of them, our dataframe looks like below

CustomerKey Recency LineNumber Monetary R F M

0 11000 212 5 4849 1 2 3

1 11001 319 6 2419.93 1 2 3

2 11002 281 2 2419.06 1 3 3

3 11003 205 4 4739.3 1 2 3

4 11004 214 3 4796.02 1 3 3

R-F-M Score

Now, let’s find the R-F-M Score for each customer by combining each factor.

def RFM_Score(x):
    return str(x['R']) + str(x['F']) + str(x['M'])

r_f_m['RFM_Score'] = r_f_m.apply(RFM_Score, axis=1)

CustomerKey Recency LineNumber Monetary R F M RFM_Score

0 11000 212 5 4849 1 2 3 123

1 11001 319 6 2419.93 1 2 3 123

2 11002 281 2 2419.06 1 3 3 133

3 11003 205 4 4739.3 1 2 3 123

4 11004 214 3 4796.02 1 3 3 133

Now, we have to identify some key segments.

If the R-F-M score of a customer is 3-3-3, their Recency is good, their Frequency is high, and their Monetary value is high. So, they are a big spender.

Similarly, if the score is 2-3-3, the Recency is moderate while Frequency and Monetary are good: this customer hasn’t purchased for some time, but buys frequently and spends a lot.

We can define something like this for all the different segments.

Now, we just have to do this in Python. Don’t worry, we can do it pretty easily, as below.

segment = [0] * len(r_f_m)

best = list(r_f_m.loc[r_f_m['RFM_Score'] == '333'].index)
lost_cheap = list(r_f_m.loc[r_f_m['RFM_Score'] == '111'].index)
lost = list(r_f_m.loc[r_f_m['RFM_Score'] == '133'].index)
lost_almost = list(r_f_m.loc[r_f_m['RFM_Score'] == '233'].index)

for i in range(0, len(r_f_m)):
    if r_f_m['RFM_Score'][i] == '111':
        segment[i] = 'Lost Cheap Customers'
    elif r_f_m['RFM_Score'][i] == '133':
        segment[i] = 'Lost Customers'
    elif r_f_m['RFM_Score'][i] == '233':
        segment[i] = 'Almost Lost Customers'
    elif r_f_m['RFM_Score'][i] == '333':
        segment[i] = 'Best Customers'
    else:
        segment[i] = 'Others'

r_f_m['segment'] = segment

CustomerKey Recency LineNumber Monetary R F M RFM_Score segment

0 11000 212 5 4849 1 2 3 123 Spenders

1 11001 319 6 2419.93 1 2 3 123 Spenders

2 11002 281 2 2419.06 1 3 3 133 Customers

3 11003 205 4 4739.3 1 2 3 123 Spenders

4 11004 214 3 4796.02 1 3 3 133 Customers

5 11005 213 4 4746.34 1 2 3 123 Spenders

Now, let’s plot a bar plot to identify the customer base for each segment, as sketched below.
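The bar plot itself is not shown here, but a simple way to produce it (a small assumed sketch using the segment column created above) is:

plt.figure(figsize=(8, 5))
# Count customers per segment and plot the counts as bars.
r_f_m['segment'].value_counts().plot(kind='bar')
plt.title('Number of customers per segment')
plt.xlabel('Segment')
plt.ylabel('Customers')
plt.show()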

Recommendations:

Based on the above R-F-M score, we can give some Recommendations.

Best Customers: We can reward them for their multiple purchases. They can be early adopters of brand-new products and good candidates for a “Refer a friend” program. They can also become the most loyal customers, with an established habit of ordering.

Lost Cheap Customers: Send them personalized emails/messages/notifications to encourage them to order.

Big Spenders: Notify them about the discounts to keep them spending more and more money on your products

Loyal Customers: Create loyalty cards with which they can earn points on each purchase; these points can later be converted into discounts.

This is how we can target customers based on customer segmentation, which helps in marketing campaigns: it saves marketing costs, retains customers, and encourages them to spend more, thereby increasing revenue.


Swift Program To Print Left Triangle Pattern Of Numbers

This tutorial will discuss how to write a Swift program to print a left triangle pattern of numbers.

A numeric pattern is a sequence of numbers used to draw different patterns or shapes such as pyramids, rectangles, crosses, etc. These numeric patterns are generally used to practice program flow control, and they are good exercises for logical thinking.

To create a left triangle pattern of numbers, we can use any of the following methods −

Using nested for loop

Using init() Function

Using stride Function

Below is a demonstration of the same −

Input

Suppose our given input is −

Num = 10

Output

The desired output would be −

1
1 2
1 2 3
1 2 3 4
1 2 3 4 5
1 2 3 4 5 6
1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9 10

Method 1 – Using Nested For Loop

We can create a left triangle number pattern, or any other pattern, using nested for loops.

Example

The following program shows how to print a left triangle pattern of numbers using nested for loops.

import Glibc

let num = 9

for x in 1...num {
    for y in 1...x {
        print(y, terminator: " ")
    }
    print(" ")
}

Output

1
1 2
1 2 3
1 2 3 4
1 2 3 4 5
1 2 3 4 5 6
1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8 9

Here, in the above code, we use nested for loops to print a left triangle pattern of numbers. The outermost for loop (from 1 to 9) handles the total number of rows to print, and each row starts on a new line. The nested for loop (from 1 to x) prints the numbers in each row, i.e., it handles the total number of columns in the pattern.

Method 2 – Using init() Function

Swift provides a built-in initializer named String.init(). Using this initializer, we can create many patterns: it creates a string in which the given string or character is repeated the specified number of times.

Syntax

Following is the syntax −

String.init(repeating: String, count: Int)

Here, repeating represents the string (or character) that this method repeats, and count represents the total number of times it is repeated in the resulting string.

Example

The following program shows how to print a left triangle pattern of numbers using the String.init() function.

import Glibc

let num = 4

for i in 1...num {
    print(String.init(repeating: "123", count: i))
}

Output

123
123123
123123123
123123123123

Here, in the above code, we create a left triangle pattern of the numeric string "123" with a height of 4 using the String.init() function. We use a for loop (from 1 to num) to print each row. Inside this loop, String.init() prints "123" repeated according to the count value (that is, i):

print(String.init(repeating:"123", count:i))

So the working of the above code is −

num = 4

In the 1st iteration, i = 1:

print(String.init(repeating: "123", count: 1))

So it prints "123" once.

In the 2nd iteration, i = 2:

print(String.init(repeating: "123", count: 2))

So it prints "123" twice.

And so on, until the 4th iteration, which completes the left triangle pattern of numbers.

Method 3 – Using stride Function

Swift provides a built-in function named stride(). The stride() function is used to move from one value to another by a given increment or decrement. In other words, stride() returns a sequence from the starting value up to, but not including, the end value, where each value in the sequence steps by the given amount.

Syntax

Following is the syntax −

stride(from:startValue, to: endValue, by:count)

Here,

from − the starting value of the sequence.

to − the end value that limits the sequence (it is not included).

by − the amount to step by on each iteration; a positive value steps upward (increment) and a negative value steps downward (decrement).

Example

The following program shows how to print a left triangle pattern of numbers using the stride() function.

import Glibc

let num = 13

for i in 1...num {
    for j in stride(from: 1, to: i, by: 1) {
        print(j, terminator: " ")
    }
    print("")
}

Output

1
1 2
1 2 3
1 2 3 4
1 2 3 4 5
1 2 3 4 5 6
1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10 11
1 2 3 4 5 6 7 8 9 10 11 12

Here, in the above code, we use nested for loops along with the stride() function. The outermost for loop (from 1 to num) handles the total number of rows to print, and each row starts on a new line. The nested for loop prints the left triangle pattern of numbers using the stride() function −

for j in stride(from: 1, to: i, by: 1) { print(j, terminator: " ") }

Here the iteration goes from 1 up to (but not including) i, increasing by one at each step, which prints the numbers 1 to 12 in a left triangle pattern.
