The Concept of Self-Expansion and the Phenomenon of Digital Moksha (Part Two)
Tauhid Nur Azhar
In this second part, we will discuss concrete examples of how the phenomenon of digital moksha is beginning to occur in our lives.
This story is about me personally. Yesterday morning, I casually went to the train station and walked straight through the gate to board the train, without having to show a boarding pass or even any identification. The face recognition system had recognized me as a legitimate passenger. In other words, the system had recognized me as a series of data points producing a unique biometric signature, one that matched the single ID number and the entitlement to the service recorded in the system after my transaction.
Who I am no longer matters much; what matters is the series of numbers and specific ratios that represent a virtual digital identity.
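To make tangible what “a series of numbers and specific ratios” means, here is a minimal Python sketch of how a gate system might match a live face embedding against enrolled passengers by cosine similarity. Everything in it, the embedding dimensions, the threshold, the masked ID, is an illustrative assumption, not PT KAI’s actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_passenger(live_embedding, enrolled, threshold=0.8):
    """Return the ID whose enrolled template best matches the live face.

    `enrolled` maps a single ID number to the biometric template stored
    post-transaction; `threshold` guards against false accepts.
    """
    best_id, best_score = None, -1.0
    for passenger_id, template in enrolled.items():
        score = cosine_similarity(live_embedding, template)
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id if best_score >= threshold else None

# Toy 4-dimensional embeddings; real systems use 128-512 dimensions.
enrolled = {"3174xxxxxxxxxxxx": np.array([0.9, 0.1, 0.3, 0.2])}
live = np.array([0.88, 0.12, 0.31, 0.19])
print(match_passenger(live, enrolled))  # -> "3174xxxxxxxxxxxx"
```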
Material value, and value in the axiological sense, or the value that axiology aims to capture, is the truth and benefit contained in a given body of knowledge.
This follows from the definition of axiology itself, which comes from the Greek words axios and logos: the theory of value.
Using technology carefully, that value can be realized accurately and quickly, with significant impact and far-reaching implications.
During the train journey, I spent my time placing online orders for Mangut Lele Girli Kutoarjo rice and Yu Gembrot Madiun pecel, each to be served at a designated station along the route. Once again, my identity and my transaction were authorized by a series of binary numbers standing in as the representation of my presence.
While waiting for the pecel to be served, I played with generative AI applications such as invideo and several others. I created many stories through prompts, which served as a bridge for imagination to crystallize into creative products, and perhaps even into innovations that point toward solutions.
I created a story about a prehistoric past, imagined through words tied to semiotic symbols that can be read as a natural language: one that can later be learned, programmed, replicated, and reproduced in various amplified forms.
Then, to make a long story short: suppose that on arriving at Tugu Yogyakarta station I slipped behind a hidden veil into a genomic research laboratory and had a sample taken for whole genome sequencing. Once that data had been fully processed, my entity in its DNA version would not only have been identified; it would also have become a representation of me in a future biodigital ecosystem.
A virtual holobiont built on storage technology, since my digital data is stored and processed by PT KAI in its Oracle Exadata, a cloud services solution offering infrastructure optimized for databases, where each storage service focuses on high performance, reliability, and scalability.
By way of background, Oracle Exadata uses a distinctive architecture that combines servers, storage, and networking into one integrated system. This includes database servers, storage servers, and networking components configured to work as a single unit.
Oracle Exadata consists of several main components. Database Servers (DB Servers) handle data processing, while Storage Servers (Cell Servers) store data and are responsible for storage operations.
An InfiniBand network connects the database and storage servers for high-speed data transfer, and the whole is completed by a software stack comprising Oracle Database, the Exadata Storage Server Software, and an optimized operating system.
Storage services certainly require infrastructure support and connectivity guarantees, which is why Oracle equipped Exadata with an InfiniBand fabric connecting all of its components for fast communication.
It is also equipped with an external network that allows access to and from Oracle Exadata from the outside.
Oracle Exadata combines disk storage with flash technology to deliver high performance: disks hold data that is accessed less frequently, while flash serves frequently accessed data.
Data processing is also key to smart storage services. The smart aspect is realized through technologies such as Smart Scan, which processes data efficiently by scanning only the parts that are actually needed.
The advantage is that the system reduces the amount of data that must be transferred over the network, which significantly improves performance.
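As a rough, hedged illustration of that idea (not Oracle’s actual implementation), the sketch below contrasts shipping whole rows to the client with applying the predicate and projection at the storage layer, so that only the needed values cross the network. The data and the size accounting are invented for the example.

```python
import sys

# Ten thousand fat rows; only a handful match the query predicate.
rows = [
    {"id": i, "station": "YK" if i % 100 == 0 else "GMR", "payload": "x" * 500}
    for i in range(10_000)
]

def bytes_of(objs):
    # Crude size proxy for "how much crossed the network".
    return sum(sys.getsizeof(str(o)) for o in objs)

# Naive scan: every full row is shipped; the client filters afterwards.
shipped_naive = rows
result_naive = [r["id"] for r in shipped_naive if r["station"] == "YK"]

# "Smart scan": the storage layer applies the predicate and the projection,
# so only the id values the query actually needs are shipped.
shipped_smart = [r["id"] for r in rows if r["station"] == "YK"]

print(bytes_of(shipped_naive), "vs", bytes_of(shipped_smart), "bytes transferred")
```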
Of course, a factor that must be weighed very seriously, especially alongside the data leaks that surfaced some time ago and the threat of ransomware attacks, is the security of data storage and of the network.
For this reason, Oracle Exadata, as a cloud services offering, has sought to develop and apply high-level security features, including data encryption, strict access control, and security monitoring.
But is Exadata impenetrable? That’s a different question, isn’t it?
So the storage of data, its capacity, its infrastructure support, and the analysis systems that optimize its use can be likened to a digital wild west that is beginning to be settled by colonies of new hybrid creatures.
My data from the face scan at the station gate is the key that opens the portal to the parallel dimension where my digital body resides.
Then what is the “geomorphology” of our new world, and how is it shaped? How does the journey of data and connectivity through asynchronous communication media like the internet evolve and construct new terra incognita, as on Ptolemy’s maps? Areas that have never before appeared on the map of civilization; areas that can only be reached by boarding a train that stops at platform 9¾, as depicted by J.K. Rowling.
The cloud services that now store my digital body are, in essence, a model of service delivery over the internet that allows flexible, metered access to computing resources, storage, and applications.
Cloud services technology is developed in stages and in a structured way through a systemic growth approach. Its core offerings include Infrastructure as a Service (IaaS), which provides access to fundamental computing resources such as virtual servers, storage, and networks; here, users retain full control over the operating system and the applications they deploy.
Cloud and cloud computing systems also offer Platform as a Service (PaaS), which provides an environment for developing and managing applications. Users can focus on application development without having to manage the underlying infrastructure.
Then there is Software as a Service (SaaS), which delivers software applications over the internet; users can access the applications without having to think about installation, maintenance, or infrastructure management.
The benefits of building storage and computing on the cloud model include automatic scalability, which adjusts resource capacity to demand as workloads grow or shrink.
There is also elasticity: the ability to resize computing resources over time, responding quickly to changing business needs.
Cloud services also grant access and independence in the form of self-service and usage-based billing, where users can set up and configure services on their own without interacting directly with the provider.
Cloud services naturally also have the advantage of distributed data storage: data can be kept in multiple geographic locations to increase availability and reliability, and providers can perform backup and recovery automatically.
For developing and optimizing the use of data, cloud services provide APIs (Application Programming Interfaces) that allow integration and automation between cloud services and existing applications or infrastructure.
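As one concrete but hedged example of such an API: many object stores, including Oracle’s OCI Object Storage, expose an S3-compatible endpoint, so a sketch with the widely used boto3 client looks roughly like this. The endpoint, bucket name, file, and credentials are placeholders, not real values.

```python
import boto3

# Placeholder endpoint and credentials; substitute your provider's
# S3-compatible values (OCI Object Storage offers such an endpoint).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstorage.example.com",
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Upload a genome report, then list what the bucket holds.
s3.upload_file("genome_report.vcf", "my-digital-body", "genomics/report.vcf")
for obj in s3.list_objects_v2(Bucket="my-digital-body").get("Contents", []):
    print(obj["Key"], obj["Size"])
```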
With these various functions represented as features of cloud services, we gain an ever-clearer picture: the digital habitat that has prepared real estate for our identity data as virtual entities is one of the new infrastructures of this universe, which we might analogize to a digital capital city of the archipelago.
Then is it only our biometric data that can be stored there? Of course not. Biometric and genomic data are just some of it. Transaction data, social credit data, preferences, cultural behavior, models of social interaction, health data, any data we produce in life can live there. Isn’t the material world now filled with sensors and data acquisition systems that can enable the formation of the anatomy and physiology of our digital body?
If we continue the journey after getting off the train at Tugu Yogyakarta station, the results of the genome examination we underwent can of course be uploaded to the cloud storage system.
With the proper authorization, which can be arranged through regulation and may even become an obligation that must be fulfilled, our genomic data can be analyzed and mapped for various insights, among other things as part of mitigating potential health disasters.
Artificial intelligence, as things now stand, can already play an effective role in analysis, in drawing associations and correlations, and even in processes of design and procreation.
The presence of mega- or big data in the cloud, together with the processing power of cloud computing, has pushed AI into service as a tool for big data analysis. Some AI models relevant to this function include, among others:
Machine Learning (ML) models: regression models can be used to understand relationships between variables and to predict values from historical data, while classification models sort data into categories or groups based on existing features.
Next come clustering models, which group similar data by patterns or specific characteristics. Deep learning models, meanwhile, span several kinds of neural networks, with deep neural networks used for tasks such as image recognition and natural language processing.
From the same family, there are Recurrent Neural Networks (RNNs), suited to sequential data such as text or time series, and Convolutional Neural Networks (CNNs), which are highly effective on image and other visual data. A short sketch of the regression, classification, and clustering families follows below.
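To make these model families concrete, here is a minimal scikit-learn sketch on synthetic data, showing a regression, a classification, and a clustering model side by side; it illustrates the categories rather than any particular dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # 200 samples, 3 features

# Regression: predict a continuous value from historical relationships.
y_reg = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)
print("R^2:", LinearRegression().fit(X, y_reg).score(X, y_reg))

# Classification: assign samples to discrete categories.
y_cls = (X[:, 0] + X[:, 2] > 0).astype(int)
print("accuracy:", RandomForestClassifier().fit(X, y_cls).score(X, y_cls))

# Clustering: group similar samples with no labels at all.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```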
Next, and by now very popular, are Natural Language Processing (NLP) models, with applied derivatives such as sentiment analysis models that identify the sentiment or feeling behind a text, commonly used to analyze customer reviews or public opinion.
Then there is Named Entity Recognition (NER), which detects entities such as people, places, and organizations in text. Other AI models for data analysis include ensemble models such as Random Forests and Gradient Boosting, which combine models to increase accuracy and predictive performance.
Then there are stacking models, which combine the outputs of several models to improve prediction quality. There is also association rule mining with the Apriori algorithm, well suited to finding association patterns in data, particularly useful for analyzing consumer behavior or purchasing patterns (the marketing domain).
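For the Apriori algorithm, one commonly used open-source implementation lives in the mlxtend library; a minimal sketch on an invented basket of the dishes from earlier in this story might look like this.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded purchase baskets (invented data).
baskets = pd.DataFrame(
    [
        {"pecel": True, "mangut_lele": True, "tea": True},
        {"pecel": True, "mangut_lele": True, "tea": False},
        {"pecel": False, "mangut_lele": True, "tea": True},
        {"pecel": True, "mangut_lele": False, "tea": False},
    ]
)

# Frequent itemsets appearing in at least half of the baskets.
itemsets = apriori(baskets, min_support=0.5, use_colnames=True)

# Rules of the form "if pecel then mangut_lele" with decent confidence.
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```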
To see trends and patterns of behavior, there are time series forecasting models such as ARIMA (Autoregressive Integrated Moving Average), used to forecast series with trend and seasonal patterns, and LSTM (Long Short-Term Memory), a type of neural network that performs well on time series prediction.
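A minimal ARIMA sketch with statsmodels, on a synthetic monthly series with trend and seasonality, shows the shape of such a forecast; the order (1, 1, 1) is an illustrative choice, not a tuned model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with a linear trend and a yearly seasonal wave.
t = np.arange(120)
noise = np.random.default_rng(1).normal(size=120)
series = pd.Series(
    0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + noise,
    index=pd.date_range("2015-01-01", periods=120, freq="MS"),
)

model = ARIMA(series, order=(1, 1, 1)).fit()   # (p, d, q) chosen for illustration
print(model.forecast(steps=6))                 # forecast the next six months
```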
Further development brings graph analytics models such as the PageRank algorithm, used in social network or web structure analysis to measure the importance of nodes in a graph, and community detection algorithms that identify groups within social networks or graphs.
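Both graph techniques are available in the networkx library; a short sketch on an invented friendship graph illustrates them.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# A toy social graph: two tight friend groups joined by a single bridge.
G = nx.Graph()
G.add_edges_from(
    [("a", "b"), ("b", "c"), ("a", "c"),   # group 1
     ("d", "e"), ("e", "f"), ("d", "f"),   # group 2
     ("c", "d")]                           # the bridge between them
)

# PageRank: node importance derived from the link structure.
for node, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))

# Community detection: recover the two friend groups from the graph alone.
print([sorted(c) for c in greedy_modularity_communities(G)])
```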
In the realm of decision making and decision support systems, there are reinforcement learning models such as Q-Learning and Deep Q-Networks (DQN), which can drive automatic decision making through processes of trial and error.
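A minimal tabular Q-learning sketch, on a toy five-cell corridor where the agent is rewarded only at the rightmost cell, shows the trial-and-error update at the heart of these models; everything here is a self-contained illustration.

```python
import numpy as np

n_states, actions = 5, [-1, +1]          # corridor cells; step left or right
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration

rng = np.random.default_rng(42)
for _ in range(500):                     # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # Explore sometimes, otherwise exploit the best known action.
        a = int(rng.integers(2)) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = min(max(s + actions[a], 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q toward reward + discounted best future value.
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # learned action per cell; 1 means "step right"
```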
The choice of AI model depends heavily on the type of data being analyzed, the specific task, and the purpose of the analysis. It is entirely possible to combine several models or AI-based analysis techniques to reach optimal conclusions.
So, with AI technology applied to the data I acquired through genomic examination, now stored neatly in the cloud, what can be done with my genetic data? What kinds of processes can we imagine being carried out?
Currently, several common data processing models are used to analyze genome sequencing results, involving the following processes:
Data preprocessing, which includes Quality Control (QC) to check the quality of sequence data and to identify and filter out inaccurate reads.
The process then continues with trimming and filtering to remove low-quality or irrelevant portions of the data.
Data refinement can then apply adapter removal to strip adapter sequences that may have slipped into the sequencing output. A sketch of these three steps follows below.
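As a hedged sketch of those three preprocessing steps, here is what quality filtering, trimming, and adapter removal might look like with Biopython on a FASTQ file. Real pipelines use dedicated tools such as fastp or Trimmomatic; the thresholds, the adapter prefix, and the file names here are illustrative.

```python
from Bio import SeqIO

ADAPTER = "AGATCGGAAGAGC"   # a common Illumina adapter prefix; illustrative
MIN_MEAN_Q = 20             # QC threshold on mean Phred quality
MIN_LEN = 30                # drop reads that end up shorter than this

def clean_reads(path):
    for rec in SeqIO.parse(path, "fastq"):
        quals = rec.letter_annotations["phred_quality"]
        # Quality control: discard reads whose mean quality is too low.
        if not quals or sum(quals) / len(quals) < MIN_MEAN_Q:
            continue
        # Adapter removal: cut the read where the adapter begins.
        idx = str(rec.seq).find(ADAPTER)
        if idx != -1:
            rec = rec[:idx]             # slicing keeps per-base qualities
        # Trimming/filtering: keep only reads that are still long enough.
        if len(rec) >= MIN_LEN:
            yield rec

SeqIO.write(clean_reads("sample.fastq"), "sample.clean.fastq", "fastq")
```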
Next, mapping or alignment can be performed, with read mapping placing DNA sequences against a reference genome to determine their location and relative position.
This can be followed by the variant calling stage, which detects genetic differences or variations such as SNPs (Single Nucleotide Polymorphisms).
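To show what variant calling means in its most naive form (production pipelines use tools such as bwa with GATK or bcftools), here is a toy sketch that piles up already-aligned reads against a reference string and reports positions where the reads disagree with it; all of the data is invented.

```python
from collections import Counter

reference = "ACGTACGTAC"
# (start position, read sequence) pairs, as if produced by read mapping.
aligned_reads = [(0, "ACGTAAGT"), (2, "GTAAGTAC"), (4, "AAGTAC")]

# Pile up the base each read reports at every reference position.
pileup = {i: Counter() for i in range(len(reference))}
for start, read in aligned_reads:
    for offset, base in enumerate(read):
        pileup[start + offset][base] += 1

# Call a SNP where the majority base differs from the reference.
for pos, counts in pileup.items():
    if not counts:
        continue
    base, support = counts.most_common(1)[0]
    if base != reference[pos] and support >= 2:
        print(f"SNP at position {pos}: {reference[pos]} -> {base} ({support} reads)")
```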
We can then run the assembly process, through mechanisms such as de novo assembly, to build new genomic sequences without using a reference. This is useful when no reference is available or suitable, or even for genetic engineering purposes.
To interpret the results of sequencing or genome manipulation, variant annotation can be performed with methods such as functional annotation, which describes the functional effects of genetic variants, such as changes in protein structure or effects on gene regulation.
These changes can then serve as clues for pathogenicity prediction, which aims to identify the potential pathological impact of a genetic variant.
Structural variant analysis can also be performed, with methods such as structural variant detection identifying large-scale changes in genome structure such as translocations, duplications, or deletions.
Another method is Copy Number Variation (CNV) analysis, which examines variation in the number of copies of genes or particular sequences.
My genomic data in the cloud can also be used to build genealogical diagrams of origin and kinship, through phylogenetic analysis with methods such as evolutionary tree construction, which builds evolutionary trees that reveal the relationships between species or individuals.
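Biopython’s Phylo module offers one way to sketch such a tree from a multiple sequence alignment; the identity distance and UPGMA method below are illustrative choices, and kin_alignment.fasta is a placeholder file name.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# A pre-computed multiple sequence alignment of kin samples (placeholder file).
alignment = AlignIO.read("kin_alignment.fasta", "fasta")

# Pairwise distances from simple sequence identity, then a UPGMA tree.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().upgma(distances)

Phylo.draw_ascii(tree)   # print the kinship tree to the terminal
```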
Next, functional genomics tests can be run on my genomic data, with methods such as gene expression analysis measuring expression levels to understand genetic activity at the transcriptional level.
Then there is proteomics integration, which combines genomic data with proteomic data for a more holistic understanding.
The idea of genome engineering and optimization can be carried further by applying machine learning to genomic analysis: predictive modeling uses machine learning algorithms to predict genomic outcomes such as genetic disease or response to treatment.
Next, clustering and classification are used to group individuals by genomic patterns or to classify types of genetic variation, as in the sketch below.
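Here is a hedged sketch of that workflow: encode each variant as 0, 1, or 2 copies of the alternate allele, then train a classifier to predict a phenotype label. The matrix below is random, so the model learns nothing biologically real; it only shows the shape of the pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# 200 individuals x 50 variants, each coded as 0/1/2 alternate-allele copies.
genotypes = rng.integers(0, 3, size=(200, 50))
# Toy phenotype influenced by two of the variants (purely illustrative).
phenotype = ((genotypes[:, 3] + genotypes[:, 17]) >= 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    genotypes, phenotype, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Which variants the model leaned on most (indices 3 and 17 should rank high).
print("top variants:", np.argsort(model.feature_importances_)[::-1][:5])
```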
My genomic data can then be processed through data integration, for instance by integrating multi-omics data, combining data from different molecular levels such as genomics, transcriptomics, and proteomics for a more comprehensive picture.
My genomic data can also be displayed and simulated with visualization and interpretation tools, such as genome browsers, which make genomic data easier to explore and interpret intuitively.
So, if this “me” has a parallel identity in the digital world, in its anthropometric and genomic aspects as well as its social ones, then where is this “me” heading? This will be an interesting topic of contemplation for us together.