DBSysWeek15PerRes.txt

Discussion Post 1

How do you define big data? What are the implications of the three industry trends (the three V's), presented in the textbook, by the Gartner Group? The three V's presented in the textbook are Volume, Velocity, and Variety.

Big data refers to the non-traditional strategies and technologies required to collect, organize, process, and extract insights from very large datasets. While the difficulty of working with data that exceeds a single computer's computing capability or storage capacity is not new, the pervasiveness, scale, and utility of this form of computing have risen sharply in recent years. Dealing with big data requires the same basic skills as working with any other dataset. However, the huge scale, the speed with which data is received and processed, and the characteristics of the data at each stage of the process all present substantial new hurdles when building solutions. Most big data systems aim to uncover insights and connections in vast datasets that would be impossible to find using traditional methods.

Volume

Big data systems are defined by the enormous volume of data they process. These datasets can be hundreds of times larger than standard datasets, necessitating additional attention at every stage of the processing and storage life cycle. Because the work sometimes exceeds the capability of a single computer, pooling, assigning, and coordinating resources among groups of computers becomes a challenge. Cluster management, and algorithms that can break jobs down into smaller chunks, are becoming more significant.

Velocity

Data constantly pours into the system from a variety of sources, and it is typically expected to be analyzed in real time to obtain insights and update the current understanding of the system.
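The real-time analysis described above can be sketched with a rolling window over a stream of events. This is a minimal illustration, not a production streaming system; the "sensor readings" and window size are invented for the example:

```python
from collections import deque

class RollingAverage:
    """Maintain a running average over the last `window` events
    from a continuous stream (a stand-in for a real stream source)."""
    def __init__(self, window=3):
        self.events = deque(maxlen=window)  # oldest events drop off automatically

    def add(self, value):
        self.events.append(value)
        return sum(self.events) / len(self.events)

# Hypothetical sensor readings arriving one at a time.
stream = RollingAverage(window=3)
for reading in [10, 20, 30, 40]:
    current = stream.add(reading)
print(current)  # average of the last three readings: (20 + 30 + 40) / 3 = 30.0
```

The bounded `deque` captures the key velocity idea: each new event updates the current understanding immediately, and stale data ages out rather than accumulating.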
Many big data practitioners have moved away from batch-oriented approaches and toward real-time streaming systems as a result of this focus on near-instant response. Data is continually added, massaged, processed, and evaluated to keep up with the influx of new information and to expose useful findings early, when they are most relevant. To protect against failures in the data flow, these requirements call for resilient systems with highly available components.

Variety

Big data problems are frequently unique because of the wide diversity of the sources being analyzed, as well as their varying quality. Data can be gathered from internal systems such as application and server logs, from social media feeds and other external APIs, from physical device sensors, and from many other sources. Big data aims to make potentially relevant data usable by merging all of it into a single system (Ellingwood, 2016).

Reference

Ellingwood, J. (2016, September 28). An introduction to big data concepts and terminology. DigitalOcean. Retrieved April 12, 2022, from https://www.digitalocean.com/community/tutorials/an-introduction-to-big-data-concepts-and-terminology

==========================================================================================================================================

Discussion Post 2

Big data refers to data sets that are too large and complex for routine data-processing and management-information applications to handle. Big data became more widely known with the advent of wearable technology and the Internet of Things, as individuals generated increasing amounts of information with their devices. Consider the information produced by geolocation sensors, web browser histories, social media activity, or healthcare applications (Mann & Hilbert, 2020). Big data can be characterized as huge volumes of information that can be analyzed computationally, using specialized software and techniques, to discover patterns, knowledge, trends, and so on.
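The two ideas above, breaking a job into smaller chunks and computationally discovering patterns across the merged whole, can be illustrated with a toy MapReduce-style word count. The chunking here runs sequentially for simplicity; it is a sketch of the divide-and-merge idea, not a distributed framework:

```python
from collections import Counter
from itertools import islice

def chunked(iterable, size):
    """Split a large input into fixed-size chunks, mimicking how a
    cluster would divide work that exceeds one machine's capacity."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def map_count(chunk):
    """'Map' step: count words within a single chunk."""
    counts = Counter()
    for line in chunk:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """'Reduce' step: merge the per-chunk counts into one result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

# Toy dataset; in a real system each chunk could run on a different machine.
lines = ["big data big", "data velocity", "big variety"]
partials = [map_count(c) for c in chunked(lines, 2)]
totals = reduce_counts(partials)
print(totals["big"])  # 3
```

Because each map step only ever sees its own chunk, no single machine needs to hold the full dataset; the reduce step merges the partial results into the global pattern.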
These data sets are so large that conventional data-processing software cannot handle them. However, when processed properly, these huge datasets can help solve problems that would otherwise be intractable (Mann & Hilbert, 2020). Big data is a large collection containing a greater variety of data, arriving at higher speeds and in ever-expanding volumes. This is also known as the three V's of big data.

Volume: The volume of data refers to its sheer amount. For the most part, big data involves large volumes of low-density, unstructured data. This data can come from social media, websites, apps, embedded devices, and so on. Today's organizations are expected to hold large amounts of information about their users. This information helps organizations shape their future plans and activities, and it also informs upcoming promotions. Today, the challenge with the volume of data is not storage space; it is how to identify the relevant data within huge datasets and put it to use.

Velocity: Velocity is the rate at which data is received. Different platforms experience different data rates. Many web services are continuously active, requiring ongoing evaluation and handling. At the same time, some platforms receive data in batches (Lane et al., 2014).

Variety: Variety refers to the types of data available today. Traditionally, collected data was structured and held within an organization in databases, CSV documents, and the like. Big data also includes new unstructured data such as text, audio, and video, which requires more work to analyze and to extract meaningful knowledge from. The huge variety of data is thus a reason to study new kinds of information and to find better approaches to processing data in the future (Lane et al., 2014).

References

Lane, J., Stodden, V., Bender, S., & Nissenbaum, H. (2014). Big Data's End Run around Anonymity and Consent.

Mann, S., & Hilbert, M. (2020). AI4D: Artificial Intelligence for Development. International Journal of Communication, 14(0), 21.

==========================================================================================================================================

Discussion Post 3

Big Data

From Elmasri and Navathe (2016), we can define big data as datasets whose size exceeds the typical ability of a database management system (DBMS) to capture, store, manage, and analyze the data. They consider big datasets to be in the range of terabytes, petabytes, or exabytes. The Gartner Group characterizes big data by volume, velocity, and variety. Together, these three industry trends map to and influence the applications and tools of big data technology (Elmasri & Navathe, 2016).

Volume: As stated earlier, big data consists of datasets larger than terabytes. An example of such large data is the information collected from sensors, like those used in autonomous vehicles. The volume of data influences an organization's decision to incorporate machine-learning capabilities that identify actionable patterns and help relieve the cognitive burden on its decision makers (Simsek et al., 2019). Social media platforms such as Facebook and Twitter rely on these approaches to manage their millions of subscribers and their data.

Velocity: When the rate at which data are generated, and the speed at which they are analyzed and used, approach the transaction speeds of stock exchanges, we consider that big data (Elmasri & Navathe, 2016). The velocity dimension is important when determining how organizations ingest, store, and manage data. In many instances, data arrives in real time and must be analyzed instantaneously. Big data analysis also enhances the speed with which products and services are produced (Gupta et al., 2019).

Variety: Sources of big data are varied and structured in many ways.
Data sources can be structured, semi-structured, unstructured, or any combination of these types. This structural heterogeneity within a dataset means that new approaches must be developed to handle the variety. Traditional database management systems are unable to process this type of data because they typically manage only structured data. Many of the challenges with today's data stem from its undefined structure.

References

Elmasri, R., & Navathe, S. B. (2016). Big data technologies based on MapReduce and Hadoop. In Fundamentals of Database Systems (7th ed., pp. 911-955). Pearson.

Gupta, S., Modgil, S., & Gunasekaran, A. (2019). Big data in lean six sigma: A review and further research directions. International Journal of Production Research, 58(3), 947–969. https://doi.org/10.1080/00207543.2019.1598599

Simsek, Z., Vaara, E., Paruchuri, S., Nadkarni, S., & Shaw, J. D. (2019). New ways of seeing big data. Academy of Management Journal, 62(4), 971–978. https://doi.org/10.5465/amj.2019.4004
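The structural heterogeneity described under Variety can be sketched by normalizing records from a structured source (CSV) and a semi-structured source (JSON lines) into one common shape. All field names and sample values here are invented for illustration:

```python
import csv
import io
import json

def from_csv(text):
    """Structured source: CSV rows with a fixed schema."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

def from_json_lines(text):
    """Semi-structured source: one JSON object per line; fields may vary."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def normalize(record):
    """Map differently named (or missing) fields onto one common shape."""
    return {
        "user": record.get("user") or record.get("username"),
        "value": float(record.get("value") or record.get("amount") or 0),
    }

csv_data = "user,value\nalice,1.5\nbob,2.0"
json_data = '{"username": "carol", "amount": 3}\n{"username": "dave"}'

records = [normalize(r) for r in from_csv(csv_data) + from_json_lines(json_data)]
print(len(records))  # 4 records merged into a single uniform structure
```

The point of the sketch is that each source needs its own parsing and field-mapping logic before the data can live in one system, which is exactly the extra work the variety dimension imposes; fully unstructured data (text, audio, video) requires far heavier processing still.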
