Understanding BigQuery: Architecture and Use Case (updated February 2024)
This article was published as a part of the Data Science Blogathon.

Introduction
Arushi is a data architect at a company named Redeem. The company provides cashback to customers who check in at restaurants and hotels: customers log in through the app, upload their bills, and get a certain percentage of the total bill amount back as cashback. The company currently uses SQL Server to manage its customer database, but its user base is growing very rapidly. To keep up with this accelerated growth, the company plans to switch to a cloud-based serverless architecture, and it chose GCP BigQuery: a fully managed, cloud-based serverless solution that can scale up and down with the load.

BigQuery Ensures Data-Driven Decision Making
Data-driven decision making is the process of making decisions based on data rather than intuition. Human intuition and observation tend to be biased and can lead to false conclusions. Data-driven decision making ensures that conclusions are drawn from the data itself, which leads to sounder judgment. To conduct the process, we follow these steps:

1. Business Problem Formulation
It is necessary to identify the right business questions before starting down the journey. Doing so sets priorities and brings clarity to the goals. Questions can be formulated around your company’s customer needs, maximizing profits, identifying potential customers, etc. In our scenario, Redeem is interested in identifying and categorizing restaurants and hotels based on revenue per user. Segmenting the restaurants and hotels lets the company devise the right strategy to maximize its earnings.

2. Identify Data Sources & Variables to Capture
Once the business problem is formulated, the next step is to identify the sources of data and the mechanism to integrate and store it in a single place. For example, data can come from CSV files, log files, web forms, marketing campaigns, etc. It is also necessary to identify the variables that are key to the business problem. In our case, Redeem captures the relevant variables (revenue, number of user check-ins, customer and user details). This data is captured from the app whenever a user checks in and is then saved in a Google BigQuery table. See the image of the BigQuery table given below:

3. Data Cleaning & Organization

4. Querying the BigQuery Table
BigQuery provides SQL syntax to query the table. We can aggregate columns, join tables, and perform many other tasks. BigQuery is ultra-fast and can fetch results within seconds. Here we select the sum of revenue from the Revenue table.

5. Insights

Understanding the BigQuery Architecture
BigQuery is Google’s cloud-based data warehouse system. Its serverless architecture separates compute from storage, which means the two can be scaled independently. This gives users flexibility and control over both performance and cost. Users don’t need to worry about the underlying infrastructure, as Google Cloud manages everything, so they can focus on business problems without deep database expertise. Data is stored in Google’s distributed file system, which is highly reliable and automatically scales with the load. Users can query through BigQuery clients (web UI, REST API, CLI, and client libraries). Data is read from storage and passed to the compute layer, where all aggregations and calculations are carried out. Google’s highly reliable network connects the two parts: storage and compute.
If we go deeper, we see that various low-level technologies like Dremel, Colossus, Jupiter, and Borg are running behind the scenes.
Dremel: It is the execution engine that converts SQL queries into an execution tree. Data is read from storage (Colossus) through slots (the leaves of the tree), and aggregation is carried out by mixers (the branches). Slots are assigned dynamically, so a single user can get multiple slots for a query.

Jupiter: It is the reliable network system that connects the leaf nodes to Colossus.
Colossus: It is Google’s distributed file system. It provisions enough disk for each user and handles replication and recovery when disks crash. It stores data in a columnar, compressed format, which optimizes space and lowers running costs.
Borg: It is a large-scale cluster management system. It protects against failures such as machine crashes and power supply failures.

BigQuery Structure
Just as a traditional database is structured into databases, tables, and columns, BigQuery follows a hierarchy of its own: projects contain datasets, and datasets contain tables.
Dataset: Just as a company has different departments, datasets serve different purposes within a project. A project can have more than one dataset.
Table: Tables are the columnar representation of the data. A dataset can have more than one table.
In our case (Redeem): Project Name – redeemanalytics
Dataset: Analytics
Tables: Customer & Revenue

Use Case of Customer Segmentation
Customers are very important to any organization. As a business, we generate revenue and profits by uniquely fulfilling our customers’ needs. Those needs vary from customer to customer, and the goal of the segmentation process is to group similar customers together.
Redeem is a cashback company that provides cashback to users of restaurants and hotels. The company generates revenue from restaurants and hotels by bringing customers to them. It is essential to identify which customers (restaurants and hotels) are getting the most check-ins, as this helps in strategizing a new pricing policy.
Business Problem Formulation: How do we identify the customers (restaurants/hotels) to whom we are providing very good business, and how do we categorize them?
Data Source: Redeem captures the app’s customer and revenue data and inserts it into BigQuery tables. We have two tables: Customer & Revenue. Let’s have a look and do some exploratory data analysis in BigQuery.

1. How many customers do we have?
Let’s count the unique customers.
The query is quite simple. We selected distinct customers from the Customer table, and we got 1000.
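As a runnable illustration of this kind of query, the sketch below uses Python’s sqlite3 module as a local stand-in for BigQuery (the schema and sample rows are invented for illustration; against BigQuery itself you would submit the same SQL through the web UI or a client library such as google-cloud-bigquery):

```python
import sqlite3

# In-memory stand-in for `redeemanalytics.Analytics.Customer`
# (schema and rows are made up for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CustomerId INTEGER, Name TEXT)")
conn.executemany(
    "INSERT INTO Customer VALUES (?, ?)",
    [(1, "Spice Route"), (2, "Blue Lagoon"), (2, "Blue Lagoon"), (3, "Casa Roma")],
)

# Same shape as the BigQuery query: count distinct customers.
unique_customers = conn.execute(
    "SELECT COUNT(DISTINCT CustomerId) FROM Customer"
).fetchone()[0]
print(unique_customers)  # 3
```

The SQLite and BigQuery SQL dialects differ in places, but an aggregation like COUNT(DISTINCT ...) behaves the same way in both.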
`redeemanalytics.Analytics.Customer`: The sequence is Project Name.Dataset.Table

2. Revenue Generated by Redeem per User
Restaurants and hotels pay a certain amount for each user check-in. In this scenario, only users who check in and upload their bill on the Redeem app are taken into account.
Let’s see the Revenue table by selecting all the columns.
We can see that there are 4 variables: CustomerId, Month, Revenue, and Number of Users. Our variable of interest, RevenuePerUser, is not present, but we can create it in the BigQuery table with the formula:
RevenuePerUser = Revenue / Number of Users

3. Categorize Customers Based on the Median
The median is the mid-point of a series when it is arranged in ascending order. We can compute the median and then measure how far each data point lies from it:
Score = (RevenuePerUser – Median of RevenuePerUser ) / Median of RevenuePerUser
We got the Median of Revenue Per User = 1791. Now we will calculate the score by the formula given above.
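The calculation can be sketched in plain Python with the statistics module (the revenue-per-user values below are made up for illustration; the article’s actual data yields a median of 1791):

```python
import statistics

# Hypothetical RevenuePerUser values for six customers.
revenue_per_user = [400.0, 900.0, 1500.0, 1791.0, 2400.0, 3600.0]

median = statistics.median(revenue_per_user)
scores = [round((rpu - median) / median, 3) for rpu in revenue_per_user]

print(median)  # 1645.5
print(scores)
```

In BigQuery itself, the same result can be produced in SQL, e.g. by computing the median with PERCENTILE_CONT and deriving the score in the same SELECT.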
The result can be saved in a new table named CustomerScore. A score of 0.5 means the value lies 50% above the median. We can segment our customers based on the table given below:
Score >= 0.5: Elite
Score between 0 and 0.5: Good
Score between -0.5 and 0: Average
Score < -0.5: Low

4. Statistics of Customer Segment
a) Find out the number of customers in the elite segment.
We will query the CustomerScore table and find the Elite Customers count.
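As a plain-Python sketch of that count (the scores and the 0.5 Elite boundary here are illustrative assumptions; in BigQuery this would be a COUNT over the CustomerScore table filtered on the score):

```python
# Hypothetical CustomerScore rows: (CustomerId, Score).
customer_scores = [
    (1, -0.757), (2, -0.453), (3, -0.088),
    (4, 0.088), (5, 0.459), (6, 1.188),
]

ELITE_THRESHOLD = 0.5  # assumed lower bound of the Elite segment

# Count customers whose score reaches the Elite threshold.
elite_count = sum(1 for _, score in customer_scores if score >= ELITE_THRESHOLD)
print(elite_count)  # 1
```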
b) Similarly, we will find the count of the other segments.

5. Final Summary of Customer Segment
Segment Count of Customers (% of total count)
6. Insights & Key Business Decision
Elite Customers: These customers are providing very high revenue per user. The customers of these restaurants & hotels are high-net-worth individuals looking for wonderful experiences. We can promote these restaurant offerings to users looking for quality.
Good Customers: These are somewhat below the Elite customers, and the users of these are looking for a balance between quality and quantity. We can promote a nice experience with discount offerings.
Average Customers: Their users are looking for discount offerings, so we can promote discounts to them.
Low Customers: We can promote free services and offers to their users.
Redeem’s strategy based on these insights is tabulated below:
Customer Segment User Types Key Business Strategy of Redeem
Elite Very high-income individuals Promote quality (experience)
Good High-income individuals Balance of experience and pricing
Average Average-income individuals Discount offerings
Low Low-income individuals Free offerings

Conclusion
The key takeaways of the article are:
GCP BigQuery is a cloud-based, serverless, fully managed data warehouse solution that scales with load demand.
It is highly useful for businesses to make data-driven decisions.
We understood the use case: Customer Segmentation with the help of BigQuery.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
Introduction to Android Architecture
Android is an operating system for mobile devices (smartphones and tablets) and an open-source platform built on the Linux kernel. It is backed by the Open Handset Alliance (OHA), a consortium led by Google that includes handset makers such as Sony and Samsung along with chip makers such as Intel. The OHA releases versions of the Android operating system (OS) for deployment on mobile devices.
Android architecture provides an integrated approach for developers to build mobile applications that can run on any device with Android installed, and it allows application components to be reused, obviating the need for redevelopment. Android source code is offered under open-source licenses on multiple websites; Google hosts most of it under the Apache License 2.0 and the kernel under the GNU General Public License 2.0. Android also provides a robust runtime environment for executing apps, with powerful interaction with peripheral devices and other apps.

What is Android Architecture?
Before studying Architecture, let us go through some of the features of the Android Operating system.
Android OS can be customized as needed, and hence we can notice many avatars of this OS are deployed in different mobile devices with multiple unique features.
It supports all mobile connectivity technologies, viz., Wi-Fi, CDMA, GSM, NFC, Bluetooth, etc., and basic functionalities like telephony, SMS, and data transfer. With this connectivity, data can be transferred back and forth between devices through various apps.
It provides Interfaces (APIs) that support location-dependent services such as GPS.
SQLite database provides storage functionalities needed by Android. Being a lightweight database, it enables simpler storage and quicker data retrieval.
It supports all versions of multimedia files (Audio/Video) and integrates a Microphone, Camera, Accelerometer, and speaker for effective management of recording and playback operations.
Developers can use HTML5 and CSS3 to create an intuitive and impressive front-end screen.
It allows multiple windows to be active simultaneously, performing different tasks.
Graphics 2D/3D are supported.
Supports NFC technology that connects two NFC-enabled devices by touching each other.
Other features include multi-language support, user-adjustable widgets, and Google Cloud Messaging.

Architecture:
It consists of several software modules to support the functioning of mobile devices. These software modules mainly contain the kernel and set of Libraries that facilitate mobile application development, and they form part of the runtime, application framework, and the actual application.
The application modules are grouped into five sections under four different layers.
The Android runtime layer has two sections, namely the DVM and libraries, while all other layers have one section each.

1. Application Layer
The application layer is the topmost layer in the architecture, and it is the front end for users. Native applications developed using the Android architecture, as well as third-party applications, are installed in this layer. Applications in this layer are executed with the help of the runtime layer, using the classes and services provided by the framework layer. Examples of applications are Email, Contacts, Calendar, Camera, Time, Music, Gallery, Phone, SMS, Alarm, Home, and Clock.

2. Applications Framework Layer
The applications framework layer holds the classes needed to develop applications on the Android platform. It enables access to hardware, handles the user interface, and manages resources for an application. The services provided by this layer are made available to the application layer as classes. Some of the components in the framework layer are the NFC service, Notification Manager, Activity Manager, telephony service, Package Manager, and view system, used in application development as needed.

3. Android Runtime Layer
The Android runtime layer is vital to this OS, containing the Dalvik Virtual Machine (DVM) and core libraries. This environment powers applications with the help of the libraries. The Dalvik virtual machine exploits Java’s inherent strengths in memory management and multi-threading to run multiple instances for Android OS and ensure that it runs effectively; it leans on the kernel for threading and OS-level functionality. This layer also provides services such as Zygote, which handles the forking of new processes, and the Android Debug Bridge. The core libraries provide the features of the Java language for developing applications on Android.

4. Kernel Layer

Framework of Android Architecture
The application framework provides Java classes for application development. Developers use these Java classes during coding. This component provides the following services.
Activity Manager: Manages the application’s lifecycle and tracks all the activities.
Content Provider: Facilitates sharing data with external applications.
Resource Manager: Enables applications to use resources such as strings, color settings, and user interface elements.
Notification Manager: Manages alerts and notifications to users on the status of application execution.
View system: Provides various view options for creating user interfaces.

Android Architecture Libraries
Some of the components in this library are:
1. Media framework to manage Audio and video recording and playing.
2. Surface Manager to monitor display functionalities and text manipulation during display.
3. SQLite for Database management.
4. FreeType for font rendering.
5. WebKit for browser functionalities.
6. Readily available widgets such as buttons, layouts, radio buttons, and lists.
7. SSL for secure communication.
8. Interfaces and other services:
Access to OS services for communication across processes.
Access to App model templates for easy development
Enables content access and interactions across applications.

Conclusion
In summary, Android architecture provides a robust framework, interfaces, and libraries for developing and executing superior applications on mobile devices. It makes full use of Android’s unique strengths, such as open source, community support, effective marketing, low cost of development, a rich environment for app development, and solid inter-app and intra-app interfaces.
A data lake is a centralized and scalable repository storing structured and unstructured data. The need for a data lake arises from the growing volume, variety, and velocity of data companies need to manage and analyze. Data lakes provide a way to store and process large amounts of raw data in its original format, making it available to many users, including data scientists and engineers. Data Lake is a flexible, cost-effective way to store and analyze data and can quickly scale and integrate with other data storage and processing systems. This allows companies to achieve better insights from their data and better decision-making.
In this article, we will:
Understand what the Cloud Storage and BigQuery services are and what they are used for.
Go through the instructions on setting up a data lake in GCP using the Cloud Storage and BigQuery services.
Get a list of companies using these services.
Review security and governance in GCP.
Overview of GCP’s Cloud Storage and BigQuery Services
Google Cloud Platform’s (GCP) Cloud Storage and BigQuery services are powerful data management and analysis tools. Cloud Storage is a fully managed, highly scalable object storage service that allows storing and retrieving data from anywhere. BigQuery is a fully managed, petabyte-scale data warehouse that supports fast SQL queries using the processing power of Google’s infrastructure. Together, Cloud Storage (GCS) and BigQuery make GCP a scalable data lake that can store structured and unstructured data. A data lake on GCP allows you to store raw data in its original format and then use BigQuery for interactive analysis, while leveraging other GCP services like Cloud Dataflow and Cloud Dataproc for data transformation and processing. Additionally, GCP provides security and access controls for data stored in a data lake, allowing you to share data with authorized users and external systems.

Step-by-Step Instructions to Set Up a Data Lake on GCP with Cloud Storage and BigQuery
A data lake on GCP using Cloud Storage and BigQuery can be set up by following these steps:
Load Data into Cloud Storage: There are several ways to load data into Cloud Storage, including uploading files, using the command-line tool, or using the Cloud Storage API.
Load data into BigQuery: There are several ways to load data into BigQuery, including using the BigQuery Web UI, the BigQuery command-line tool, or a BigQuery client library. When you load data into BigQuery, you can choose to append new data to an existing table, overwrite existing data, or create a new table with each load.
Perform Data Analysis and Visualization: Once the data is uploaded into BigQuery, you can analyze it using SQL queries, create reports and dashboards using Google Data Studio, or use machine learning models in BigQuery ML. You can visualize the data using GCP’s built-in visualization tools, like Data Studio, or integrate with other BI tools, like Tableau or Looker.
Set up Data Management and Access Controls: It is important to set up a data management strategy to ensure the data in your data lake is organized, protected, and maintained. The access controls ensure that only authorized users can access and modify the data in the data lake.
By following these steps, you can set up a data lake on GCP using Cloud Storage and BigQuery and store large amounts of structured and unstructured data for analysis and visualization.

Examples of Companies Using GCP Data Lakes
A data lake on GCP using Cloud Storage and BigQuery can provide many benefits for companies looking to store, process, and analyze large amounts of data. Many use cases and examples exist of companies successfully using GCP data lakes to gain insights and drive business value. Some of them are as follows:
Retail companies use GCP data lakes to analyze customer purchase behavior, while media companies use data lakes to analyze viewer engagement.
Financial services companies use GCP data lakes for fraud detection and compliance reporting.
Healthcare companies use GCP data lakes for population health management and precision medicine.
E-commerce companies use GCP data lakes for customer behavior analysis and personalized recommendations.
Travel and transportation companies use GCP data lakes for route optimization and passenger management.
Telecommunications companies use GCP data lakes for network performance monitoring and customer experience management.

An Overview of GCP Data Lake Security and Governance
Security and governance are essential for setting up a GCP data lake using Cloud Storage and BigQuery. Here are a few best practices to keep in mind:
Data encryption: All data in a data lake should be encrypted in transit and at rest. GCP has various encryption options, like customer-managed encryption keys, to ensure that data is protected.
Access control: Ensure only authorized users can access data in the data lake. The Identity and Access Management (IAM) service controls access to data and resources.
Data governance: Implement policies to ensure data is accurate, complete, and consistent. This includes monitoring data quality, tracking data lineage, and controlling data access.
Compliance: Ensure that the data lake meets regulatory requirements for data storage and processing. GCP has a variety of compliance certifications, like SOC 2, to meet the needs of different industries.
Auditing: Implement auditing and logging to track data access and monitor data lake activity. GCP’s Stackdriver service can be used to monitor and analyze logs.
You can ensure the security and compliance of your GCP data lake by following these best practices.

Conclusion
In conclusion, for businesses looking to store, process, and analyze large amounts of data, GCP’s data lakes can provide a powerful and flexible platform to them. By using Cloud Storage and BigQuery, companies can easily ingest and store data from varying sources and then perform analytics and visualization to gain insights and drive business value.
The key takeaways of this article are given below:
A data lake set up on GCP using Cloud Storage and BigQuery is a scalable, flexible, and cost-effective data storage and processing solution.
Cloud Storage is the primary storage layer in such a data lake, used to store large amounts of raw and unstructured data.
BigQuery, on the other hand, is used for data analysis, processing, and querying.
In the future, data lakes on GCP will continue to evolve and provide new and innovative ways for companies to gain insights from their data. As data becomes an increasingly valuable asset for businesses, data lakes on GCP will play a critical role in helping companies to make data-driven decisions and stay competitive in today’s fast-paced business environment.
The floor and ceiling functions are mathematically used to round numbers to the nearest integer. This comprehensive guide will explore the floor and ceiling functions in Python, understand their formulas, and discover their various use cases and applications. Additionally, we will delve into the behavior of these functions and highlight common mistakes to avoid when working with them.
What is the Floor Function?
The floor function, denoted as floor(x) or ⌊x⌋, returns the largest integer less than or equal to x. It rounds down a given number to the nearest whole number. Let’s explore the formula and implementation of the floor function in Python.
Formula: floor(x) = ⌊x⌋

Python Implementation

```python
import math

x = 3.8
floor_value = math.floor(x)
print("Floor value of", x, "is", floor_value)
```

Output

Floor value of 3.8 is 3

Floor Function Formula Derivation
The floor function satisfies the identity

⌊x + n⌋ = ⌊x⌋ + n

for all integers n. A number of geometric-like sums with a floor function in the summand can be evaluated analytically for rational x (for example, for a unit fraction x = 1/m), while for irrational x, sums of this form lead to Devil’s-staircase-like behavior.

What is the Ceil Function?
The ceil function, denoted by ceil(x) or ⌈x⌉, returns the smallest integer greater than or equal to x. It rounds a given number up to the next whole number. We will discuss the formula and implementation of the ceil function in Python.
Formula: ceil(x) = ⌈x⌉

Python Implementation

```python
import math

x = 3.2
ceil_value = math.ceil(x)
print("Ceil value of", x, "is", ceil_value)
```

Output

Ceil value of 3.2 is 4

Floor vs Ceil Function
The ceiling and floor functions will be compared in this section, along with their main similarities and situations to apply both.
While the ceiling function rounds up to the nearest integer, the floor function rounds down to it. For instance, for 3.5 the floor function yields 3, but the ceiling function returns 4.
The floor function is handy when values need to be rounded down, like when determining the number of whole units. On the other hand, the ceil function is handy when rounding up is required, like when allocating resources or determining the minimum number of elements.
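A small sketch of that distinction, using a hypothetical pagination scenario:

```python
import math

items, per_page = 23, 10

# floor: how many complete pages the items fill.
full_pages = math.floor(items / per_page)
# ceil: the minimum number of pages needed to hold every item.
pages_needed = math.ceil(items / per_page)

print(full_pages, pages_needed)  # 2 3
```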
Use Cases and Applications of the Floor Function
Explore real-world applications of this function across various domains, including finance, data analysis, and computer science.
In finance, the floor function is used for mortgage calculations to determine the minimum monthly payment required based on interest rates and loan duration.
The floor function can be employed in data analysis to discretize continuous variables into discrete intervals for easier analysis and visualization.
In computer science, the floor function is useful in algorithms involving dividing or partitioning resources among multiple entities.

Use Cases and Applications of the Ceil Function
Discover the practical applications of the ceil function in different fields, such as mathematics, statistics, and programming.
The ceil function is used in mathematics to compute the least integer greater than or equal to a given number, essential in various mathematical proofs and calculations.
In statistics, the Ceil function is employed in rounding up sample sizes or determining the required observations for statistical tests.
In programming, the ceil function finds application in scenarios such as rounding up division results, handling screen pixel dimensions, or aligning elements within a grid system.
Understanding the Behavior of the Floor Function
Gain insights into the behavior of the floor function, including its handling of positive and negative numbers, fractions, and special cases.
It always rounds down, even for negative numbers. For example, floor(-3.8) returns -4.
It rounds fractions down to the nearest integer. For instance, floor(3.8) and floor(3.2) both return 3.
It yields the same result when the input is already an integer. As an illustration, floor(5) returns 5.

Understanding the Behavior of the Ceil Function
Deepen your understanding of the ceil function’s behavior, including its treatment of positive and negative numbers, fractions, and specific scenarios.
The ceil function always rounds up, even for negative numbers. For example, ceil(-3.8) returns -3.
The ceil function rounds up fractions to the nearest integer. For instance, ceil(3.8) and ceil(3.2) result in 4.
The ceil function yields the same result when the input is already an integer; ceil(5), for instance, returns 5.

Common Mistakes to Avoid While Using the Floor Function
Identify and rectify common mistakes made when utilizing the floor function in Python. Learn best practices and troubleshooting techniques.
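One situation worth spelling out is rounding down to a specific decimal place, which math.floor alone does not do. A common workaround is to scale before flooring (floor_to below is a hypothetical helper; binary floating-point can introduce surprises near boundaries, for which the decimal module is the safer tool):

```python
import math

def floor_to(x, places):
    # Scale up, floor to an integer, then scale back down.
    factor = 10 ** places
    return math.floor(x * factor) / factor

result = floor_to(3.14159, 2)
print(result)  # 3.14
```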
Forgetting to import the math module before using the floor function.
Using the floor function incorrectly in situations that require rounding to a specific decimal place.
Confusing the floor function with other rounding functions, such as round or trunc.

Common Mistakes to Avoid While Using the Ceil Function
Discover common pitfalls encountered when working with the ceil function in Python and acquire strategies to overcome them.
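Two of these pitfalls are easy to demonstrate: Python’s // operator floors the quotient (it neither truncates toward zero nor rounds up), so it is not a substitute for ceil:

```python
import math

a, b = 7, 2

floor_div = a // b           # 3: // floors the quotient
ceil_div = math.ceil(a / b)  # 4: ceil rounds the quotient up
neg_div = -7 // 2            # -4: // floors toward negative infinity, not toward zero

print(floor_div, ceil_div, neg_div)  # 3 4 -4
```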
Neglecting to import the math module before using the ceil function.
Using the ceil function incorrectly when rounding down is required.
Confusing the ceil function with other rounding functions or with integer division.

Conclusion
In conclusion, understanding Python’s floor and ceiling functions is essential for precise number rounding and various mathematical operations. By mastering these functions, you will enhance your mathematical and programming skills. Remember to utilize these functions accurately, considering their behavior and use cases. Keep exploring and applying these functions in your Python projects to unlock their full potential.
Frequently Asked Questions
Q1. What is the difference between floor and ceiling functions in Excel?
A. In Excel, the floor and ceiling functions round numbers down or up, respectively, to the nearest integer.
Q2. What is the floor 2.4 ceil 2.9 equal to?
A. floor(2.4) equals 2 and ceil(2.9) equals 3.
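This is easy to verify in Python:

```python
import math

floor_result = math.floor(2.4)
ceil_result = math.ceil(2.9)
print(floor_result, ceil_result)  # 2 3
```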
Q3. What is the function of a floor function?
A. The floor function returns the largest integer less than or equal to a given number.
What You Need to Know About Artificial General Intelligence and Its Capabilities
Artificial general intelligence (AGI) would enable intelligent machines to perform tasks the way a human can. It represents general human cognitive abilities in software, allowing a system faced with an unfamiliar task to find a solution. In this article, we will build a deep understanding of artificial general intelligence and its capabilities.
AGI is also known as strong artificial intelligence (AI), in contrast to weak or narrow AI, which is the application of artificial intelligence to specific tasks or problems. Narrow artificial intelligence is demonstrated by IBM’s Watson supercomputer, expert systems, and self-driving cars.

What Are the Capabilities of Artificial General Intelligence?
An intelligent system with comprehensive or complete knowledge and cognitive computing capabilities is referred to as AGI in computer science. As of now, there are no true AGI systems; they are the stuff of science fiction. In those terms, the performance of these systems is indistinguishable from that of a human. However, because of its ability to access and process massive data sets at incredible speeds, AGI’s broad intellectual capacities would exceed human capacities.
True AGI should be capable of performing human-level tasks and displaying abilities that no existing computer can. AI can already perform a wide range of tasks, but not with the level of success that would qualify as human or general intelligence.

AGI must have the following abilities:
An understanding of cause and effect

The following are some practical examples of AGI capabilities:
An AGI system could theoretically read, improve, and comprehend human-generated code.
Fine motor skills: An example is taking a set of keys from a pocket, which requires some imaginative perception.
Sensory perception: Color recognition is a subjective type of perception that AGI would excel at. It could also detect depth and three dimensions in static images.
Natural language understanding: Understanding human language is highly dependent on context, and AGI systems would possess the level of intuition required for true natural language understanding (NLU).
Navigation: The existing Global Positioning System (GPS) can pinpoint a geographic location. Once fully developed, AGI would be able to project movement through physical spaces better than existing systems.

AI researchers also expect AGI systems to have higher-level capabilities, such as the ability to do the following:
Create fixed structures for all tasks
Use different kinds of knowledge
Handle various kinds of learning and learning algorithms
Engage in metacognition and make use of metacognitive knowledge
Understand belief systems

Understand symbol systems

Difference Between AGI and AI
AGI should theoretically be able to perform any task that a human can and exhibit varying levels of intelligence. In most areas of intelligence, it should be as good as or better than humans at solving problems.
Weak AI, on the other hand, excels at completing specific tasks or solving specific types of problems. Many existing AI systems combine machine learning, deep learning, reinforcement learning, and natural language processing to self-improve and solve particular problems. These technologies, however, fall far short of the full capability of the human brain.

AGI does not yet exist, but AI is used in a variety of situations. Examples of AI include the following:
Voice assistants such as Alexa and Siri
Customer service chatbots
Marketing platforms used to gather customer sentiment and business intelligence
Recommendation engines such as those used by Netflix, Google, and Spotify
Facial recognition applications

What Is the Future of AGI
Many experts are sceptical that AGI will ever become a reality.
In a 2014 interview with the British Broadcasting Corporation, English theoretical physicist, cosmologist, and author Stephen Hawking warned of the dangers. “The development of full artificial intelligence could mean the extinction of the human race,” he warned. “It would take off on its own, redesigning itself at a rapid pace. Humans, hampered by slow biological evolution, would be unable to compete and would be surpassed.”
However, some AI experts predict that AGI will continue to evolve. In an interview at the 2023 South by Southwest Conference, inventor and futurist Ray Kurzweil predicted that computers will achieve human-level intelligence by 2029.
Another viewpoint that supports the eventual development of AGI is the Church-Turing thesis, developed by Alan Turing and Alonzo Church in 1936. It asserts that, given an infinite amount of time and memory, any problem can be solved using an algorithm. What remains unclear is which algorithm underlies human cognition. Some believe that neural networks hold the most promise, while others believe a combination of neural networks and rule-based systems does.
What Is Community Cloud

A community cloud can be defined as a cloud-based infrastructure that enables multiple organizations to share services and resources derived from common regulatory and operational requirements. It offers a shared platform and resources on which different organizations can address their business requirements. It is operated and managed by community members, third-party vendors, or both.
The organizations that share common business requirements constitute the members of the community cloud. Common business requirements include shared industry regulations, shared data storage, or shared infrastructure requirements.

Factors to consider before adopting Community Cloud
Community cloud helps organizations with common business concerns use the public or private cloud cost-effectively. It offers unlimited scalability and delivers faster cloud deployment.
Here are the key factors that you need to consider before adopting the community cloud:
The community cloud allows individual organizations to work together.
It enables data sharing among different organizations while adhering to strict regulations and security requirements.
Service level agreements should be reviewed and understood by organizations.
The trading firms need to understand the economic model of the community cloud.
They should understand how it manages data storage and security issues.
Organizations should consider the availability/uptime of the community cloud.
Organizations should evaluate how tenants manage issues when selecting a community cloud.
Community cloud challenges should be mitigated by the cloud provider.
Sensitive data should be managed effectively by the cloud provider.

Community Cloud Architecture
Members of the community cloud are essentially organizations with similar business needs. These business requirements are derived from the industry regulations along with the need for shared data and services.
Here are all the entities of community cloud architecture:
Participating organizations: the member organizations, often depicted as "Org 1," "Org 2," and "Org 3." Such organizations generally have shared policies and protocols.
IAM: identity and access management, which provides authorization and access to the cloud in line with the shared protocols and policies adopted by the different organizations.
Cloud manager: an entity that serves as the interface through which different organizations manage their shared resources and protocols.
Storage requirements: different clouds may offer separate storage in accordance with the requirements of the participating organizations. These requirements are documented in the community cloud's service level agreements.
The cloud managers have to further split the data centers’ responsibilities and costs among participating organizations.
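As a minimal illustration of how data-center costs and responsibilities might be split among participating organizations, here is a hedged sketch in Python. The organization names, usage figures, and proportional-to-usage policy are assumptions for illustration only; real community clouds define the split in their service level agreements.

```python
# Hypothetical sketch: splitting shared data-center costs among
# community-cloud members in proportion to their resource usage.
# Names, figures, and the allocation policy are illustrative, not from the article.

def split_costs(total_cost: float, usage_by_org: dict[str, float]) -> dict[str, float]:
    """Allocate total_cost proportionally to each organization's usage."""
    total_usage = sum(usage_by_org.values())
    if total_usage == 0:
        # No recorded usage: fall back to an equal split.
        share = total_cost / len(usage_by_org)
        return {org: share for org in usage_by_org}
    return {org: total_cost * usage / total_usage
            for org, usage in usage_by_org.items()}

# Example: three member organizations with different storage usage (in TB).
usage = {"Org 1": 50.0, "Org 2": 30.0, "Org 3": 20.0}
shares = split_costs(10_000.0, usage)
print(shares)  # Org 1, with half the total usage, bears half the cost
```

Other policies (equal split, tiered pricing) are equally possible; the point is that the cloud manager applies one agreed-upon rule consistently across members.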
Since implementing a community cloud can be complex, the community members prepare a handbook that covers the mission statement, services, and resource ownership. The handbook provides detailed information on the cloud solutions involved.

Key Components of Community Cloud Architecture
The community cloud generally has a distributed architecture. Its key components are described below.

Shared policies and protocols:

Participating organizations operate and maintain the community cloud through a long-term commitment. The members have to collaborate on and design the following:
Governance Policies: A community cloud is always monitored through a governance model. In the shared cloud platform, governance policies enable effective monitoring among all stakeholders.
Security protocols: Regulations must be designed, analyzed, and interpreted periodically to ensure the community cloud's smooth functioning. Together, these regulations constitute the community cloud's security protocols.
HIPAA is an example of a regulation that requires email encryption in order to meet compliance standards.
Access policies: Participating organizations need to document and maintain access policies. These are policies that govern who is authorized to use which resources under the community cloud.
Allocation policies: The community cloud's developers should answer all questions related to business continuity before setting up the community cloud.

Cloud:
Cloud computing is a key component of any community cloud.
A community cloud is built on top of a private cloud.
Moreover, off-the-shelf community clouds exist that are tailored to the needs of specific industries and government agencies.

Cloud management system:
Under the community cloud, a cloud management system plays an important role. It helps in delivering cloud operations with ease.
It also runs regular updates to ensure systems stay up to date. A community cloud additionally needs more specific controls for resource allocation, app and data management, and the placement of security protocols.

Identity and access management system:
The identity and access management system helps identify the many users, belonging to different organizations, who will access the community cloud.

Data governance tool:
The community cloud offers a tool that supports data governance. It oversees the creation, updating, and deletion of the data.
These activities are performed per the data policies already agreed upon and defined between stakeholders.

Shared application service:
This is one of the crucial components of a community cloud. It primarily provides common services and applications, which different departments working under the same organization utilize.

Resource allocation is the key building step in the community cloud. The specifics of bandwidth and storage should be carefully considered when allocating resources.

Advantages of Community Cloud
The community cloud offers many benefits, as described below:
Cost efficacy: The community cloud allows multiple, different users to connect to the same environment. Their sessions are separated logically, so there is no need for separate servers, which makes the model cost-effective for organizations.
Regulatory compliance: Privacy regulations evolve constantly and vary at the national, regional, and global levels. An industry-specific community cloud helps its members keep up with these changing rules.
High scalability and availability: The community cloud offers the same level of availability and scalability as other cloud services, with minimal downtime.
Security requirements per industry standards: Community clouds typically provide the expertise that meets industry security standards.
More and Better Control: A community cloud is designed to combine the best features of both public and private clouds. It also offers off-site backups at regular intervals.

Community Cloud Use cases & Examples
A number of use cases feature industry-specific community clouds and their success stories. Demand for community clouds is growing rapidly, and various cloud service providers now deliver solutions based on the community cloud model.
Finance: The community cloud has become a popular model for financial institutions to manage sensitive customer information and monetary transactions. Such models are designed to meet the security and compliance requirements of financial institutions.
Public and Government sectors: The community cloud model has become popular among government departments for managing sensitive communication and infrastructure needs.
Federal agencies generally develop highly secure government clouds which ensure that the data remains protected.
Educational institutions: The community cloud model is the most suited model for educational institutions as it allows them to share information, research material, and educational content on the cloud.
In group projects, the model can be used to facilitate question-and-answer sessions that help foster collaboration.
Health care industry: The community cloud model also has use cases in health care, a sector that handles highly sensitive information. Many pharmaceutical companies collaborate with hospitals to provide quick healthcare solutions, and a community cloud enables this information sharing without disclosing any private data.

Best Practices for Community Cloud
Below are the best practices that should be adopted for the community cloud model:
Evaluation and selection of the correct cloud management system: Organizations need to select a robust and comprehensive management system; this must be a high priority when selecting third-party vendors. The system should allow administrators to identify how storage space is being used and should provide a proper audit trail.
Documentation on shared ownership terms: Each term and condition documented under the community cloud should be thoroughly discussed. The terms and conditions are to be approved by participating organizations before they make it to the final draft.
The service level agreement should specifically highlight the allotment of storage percentage and bandwidth for each approved member of the community cloud.
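As a minimal sketch, an SLA storage allotment of this kind can be sanity-checked in code, for example by verifying that the per-member percentages add up to exactly 100. The member names and figures below are hypothetical assumptions, not values from the article.

```python
# Hypothetical sketch: validating that the SLA storage allotments for the
# approved members of a community cloud sum to exactly 100 percent.
# Member names and percentages are illustrative assumptions.

def validate_storage_allotment(allotment_pct: dict[str, float]) -> None:
    """Raise ValueError if the per-member storage percentages do not sum to 100."""
    total = sum(allotment_pct.values())
    if abs(total - 100.0) > 1e-9:  # small tolerance for floating-point error
        raise ValueError(f"Storage allotments sum to {total}%, expected 100%")

sla = {"Org 1": 40.0, "Org 2": 35.0, "Org 3": 25.0}
validate_storage_allotment(sla)  # passes: 40 + 35 + 25 = 100
print("SLA storage allotment is consistent")
```

The same check could be extended to bandwidth shares or any other finite resource the agreement divides among members.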
Determine the cost policies applicable to procurement of a new community cloud: Organizations must decide who provides the basic funding for the community cloud and who hires the cloud experts and integrators. They should also decide on the management and terms of fund transfers, and establish metering capabilities to track granular resource usage.
Management of security and patch requirements: Community members should establish security standards that match those already established within their industry.
Decision over the data segmentation plan: Data segmentation is contingent on the rules defined under the industry's regulations. For example, a community cloud may segment data and cloud resources to meet the security requirements of high-level government departments.

Summary
It helps in data sharing for remote employees.
It keeps each organization's workloads logically separated without requiring separate servers.
The community cloud offers customization that meets the security requirements and specific industry regulations.
It offers unlimited scalability and is a highly flexible solution.
The cloud providers help in the mitigation of community cloud challenges.
Community cloud is derived from the concept of cloud computing.
Community cloud provides an end-to-end integration setup.
Public clouds are cheaper as compared to the community cloud model.