This paper provides a discussion of data mining, its dimensions and its significance in the coming years. Generally, data mining (also called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information. It allows users to analyze data from many different dimensions or angles, categorize it and summarize the relationships identified. Although data mining is a relatively new term, the technology is not. Powerful computers have been used for years to sift through volumes of supermarket scanner data and analyze market research reports. The amount of data being generated and stored is growing exponentially, due in large part to continuing advances in computer technology. Data mining is primarily used today by organizations with a strong consumer focus - retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among "internal" factors such as price, product positioning, or staff skills, and "external" factors such as economic indicators, competition, and customer demographics. It also enables them to determine the impact on sales, customer satisfaction, and corporate profits, and finally to "drill down" into summary information to view detailed transactional data (Ref: Vikrant Dewan, Nipun Yadav, Neha Yadav. Knowledge discovery in databases (data mining): a perspective. Discovery Engineering, 2013, 2(8), 69-73).
Software systems pass through a series of stages that account for their inception, initial development, productive operation, upkeep, and retirement from one generation to another. This article categorizes and examines the area of software development through its various development models. It presents five development models, namely the Waterfall model, Incremental model, V-shaped model, Spiral model and RAD model. Each of these models has its own advantages and disadvantages. The main objective of this research paper is therefore to introduce the concept of a software process, present the different models of software development, and compare them to show the features and defects of each model.
The intention of this paper is to provide an overview of compiler design. The overview covers previous and existing concepts as well as current technologies. The paper also covers the definition, design, and advantages of a compiler and its different parts. Through this paper we aim to create awareness of the growing field of compiler design. The paper also offers a comprehensive set of references for each concept in lexical analysis.
Cluster computing is an increasingly popular high-performance computing solution. It has become a popular topic of research among the academic and industrial communities, including system designers, network developers, algorithm developers, as well as faculty and graduate researchers. This paper focuses on the importance of cluster computing, giving an overview and a brief history. Various architectures, such as Shared-Disk Clusters, Shared-Nothing Clusters and Mirrored-Disk Clusters, are discussed, along with the complete implementation process and the prerequisites for creating a computer cluster. The paper also presents existing examples of computer clusters and the benefits of the technology.
Socket programming provides a communication mechanism between two computers using TCP. A client program creates a socket on its end of the communication and attempts to connect that socket to a server. When the connection is made, the server creates a socket object on its end of the communication. The client and server can now communicate by writing to and reading from the socket. The java.net.Socket class represents a socket, and the java.net.ServerSocket class provides a mechanism for the server program to listen for clients and establish connections with them.
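The client/server exchange described above can be sketched in a few lines of Java. This is a minimal single-connection echo round trip, not a production server: the class name `EchoDemo`, the "echo: " reply prefix, and the use of port 0 (so the OS picks a free port) are illustrative choices, not anything prescribed by the API.

```java
import java.io.*;
import java.net.*;

// Minimal TCP round trip using java.net.ServerSocket and java.net.Socket.
public class EchoDemo {

    // Starts a one-shot echo server, connects a client to it, and returns the reply.
    static String roundTrip(String message) throws Exception {
        ServerSocket server = new ServerSocket(0);   // port 0: the OS picks a free port
        Thread serverThread = new Thread(() -> {
            try (Socket peer = server.accept();      // blocks until the client connects
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(peer.getInputStream()));
                 PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());   // read one line, echo it back
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        serverThread.start();

        String reply;
        try (Socket socket = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);                    // client writes to the socket
            reply = in.readLine();                   // and reads the server's reply
        }
        serverThread.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));      // prints "echo: hello"
    }
}
```

Note that `accept()` blocks until a client connects, which is why the server side runs on its own thread here; a real server would loop on `accept()` and hand each connection to a worker.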
Corporations have been amassing data for decades, building huge data warehouses in which to store it. Even though this data is available, very few companies have been able to realize the actual value stored in it.
Although data mining is a relatively new term, the technique is not. Corporations have used powerful computers to sift through volumes of supermarket scanner data and analyze market research reports for years.
In the last few years there has been an exponential increase in computer processing power, data storage and communication. Yet there remain many complex, computation-intensive problems that cannot be solved even by supercomputers.
The paper introduces the data warehouse and online analytical processing. A data warehouse provides an effective way to analyze and summarize mass data, and helps to support decision-making.
Managing main memory, especially when it needs to be shared among multiple concurrent tasks or even different users, is a challenging problem. While the size of memory has been increasing steadily, and modern CPUs now use long address fields capable of generating very large address spaces, the amount of memory actually installed in most computers remains a limiting factor.
In software engineering, software quality has become a topic of major concern. As software becomes critically important for an organization to remain competitive in its business, the requirement that the software strongly support the organization in achieving its goals means that the software should have high utility and user-perceived quality. It has also been recognized that maintenance is becoming the main activity in software work.
Programming languages are notations for describing computations to people and to machines. The world as we know it depends on programming languages, because all the software running on all the computers was written in some programming language. But, before a program can be run, it first must be translated into a form in which it can be executed by a computer.
A shared-memory multiprocessor consists of a number of processors accessing one or more shared memory modules. The processors can be physically linked to the memory modules in a variety of ways, but logically every processor can access every module.
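The shared-memory model can be illustrated in software by threads standing in for processors, all updating one shared variable standing in for a memory module. This is an analogy rather than a hardware description; the class name `SharedCounter` and the thread/iteration counts are illustrative. An atomic read-modify-write is used so that concurrent updates are not lost.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Threads as stand-ins for processors, all updating one shared counter
// (the stand-in for a shared memory module).
public class SharedCounter {

    static int parallelIncrement(int threads, int perThread) throws InterruptedException {
        AtomicInteger shared = new AtomicInteger(0);   // logically visible to every thread
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    shared.incrementAndGet();          // atomic read-modify-write
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();             // wait for all "processors"
        return shared.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads x 100000 increments: no updates are lost.
        System.out.println(parallelIncrement(4, 100_000));
    }
}
```

Replacing the `AtomicInteger` with a plain `int` and `shared++` would typically lose updates, which is exactly the coordination problem shared-memory multiprocessors must solve in hardware (cache coherence) and software (synchronization).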
JDBC (short for Java Database Connectivity), from Oracle Corporation, is a Java-based data access API for the Java programming language that defines how a client may access a database. It provides methods for querying as well as updating data in a database.
In computer networking, a single layer-2 network may be partitioned into multiple distinct broadcast domains that are mutually isolated, so that packets can only pass between them via one or more routers; such a domain is referred to as a Virtual Local Area Network, Virtual LAN or VLAN. This is usually achieved on switch or router devices.
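On a managed switch, this partitioning is typically a short configuration exercise. The fragment below is a Cisco IOS-style sketch, shown only as an illustration: the VLAN IDs, names, and interface identifiers are hypothetical, and the exact commands vary by vendor and platform.

```
! Define two VLANs (IDs and names are illustrative)
vlan 10
 name ENGINEERING
vlan 20
 name SALES
! Assign access ports to each VLAN
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
interface FastEthernet0/2
 switchport mode access
 switchport access vlan 20
```

Hosts on FastEthernet0/1 and FastEthernet0/2 are now in separate broadcast domains even though they share the same physical switch; traffic between them must pass through a router (or a layer-3 switch).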
In this communication, we explore research carried out in the field of face recognition, with the ultimate aim of producing a highly effective face recognition algorithm, for use in such application areas as secure site access, suspect identification and surveillance.
Risk management is the process of measuring or assessing risk and then developing strategies to manage it. In general, the strategies employed include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk.
This paper discusses emerging opportunities for natural language processing (NLP) researchers in the expansion of educational applications for writing, reading and content knowledge acquisition. A brief historical viewpoint is provided, and existing and emerging technologies are described in the context of research related to content, syntax, and discourse analyses. Most NLP applications, such as information extraction, machine translation, sentiment analysis and question answering, require both syntactic and semantic analysis at various levels. Conventionally, NLP research has focused on developing algorithms that are either language-specific or perform well only on closed-domain text.