Introduction
Modern technologies have made it possible to put innovative solutions into practice that improve people's standard of living.1 In order to learn more about and address health-related issues, researchers studying the development of technology have identified and assessed sources of health information. The development of integrated healthcare technology therefore has the potential to increase productivity and improve patient outcomes at every level of the healthcare system.2 Through reliable patient safety controls, pervasive data access, remote inpatient monitoring, rapid clinical interventions, and decentralized electronic health records, new electronic health (e-health) application systems can address several shortcomings of traditional healthcare systems (Figure 1). These systems can manage patient data and health information, improve patient outcomes and quality of life, increase collaboration, reduce costs, and boost the overall effectiveness of e-healthcare services. Furthermore, Gamberini et al. (2021) characterized e-health as a high-tech sector that integrates networking, the Internet, and healthcare, and significantly benefits the system's consumers and stakeholders.
E-health is a new discipline that combines medical informatics, public health, and Internet-based health services.3 It embraces and promotes the global use of new technologies to address complex issues, reduce costs, and enhance patient care. The development of the Internet of Things (IoT), meanwhile, has driven the emergence of a plethora of smart services that offer access to shared and configurable resources, including computers, cloud computing and storage, network-connected devices, software, and other systems.4 As a result, IoT-connected systems, devices, and models are now pervasive. The development of allied communication technologies, such as computing intelligence for business, industry, and operational systems, has occurred alongside this broad adoption of IoT. Highly effective and robust security mechanisms are required for the effective and secure adoption of health information technologies, services, and overarching e-health systems. The widespread use of IoT systems has spurred IoT research and development, leading to a variety of architectures for use in health networks. By connecting networks, things, applications, and services to the IoT, e-health systems can exchange relevant data using cutting-edge technologies.5
The growth of e-health systems (EHS) has been significantly accelerated by the use of information and communication technology in the healthcare industry. Thanks to the effective use of these technologies, health records are transmitted as digital assets to the appropriate parties throughout the entire e-health ecosystem. Particularly in developing nations, the modern EHS offers an affordable way to improve the availability of personnel and resources and the monitoring of patient conditions.6 The accessibility and caliber of services have recently been enhanced by the advent of IoT healthcare devices, commonly referred to as the Internet of Medical Things (IoMT). Accordingly, the global market for digital health was projected to reach around 206 billion USD by 2020, with roughly 161 million healthcare IoT devices in operation by that year (Figure 2).6 According to the World Health Organization, more than 80% of nations have implemented e-health programs in some capacity, and this number is anticipated to rise with the adoption of 5G technologies.7
Several industrialized countries already operate national-level health services, and the majority of digitization efforts are focused at that level. Because of the nature of health informatics, budgeting, staffing, patient management, legal matters, logistics, supplies, and other medical workflows frequently consist of a series of conditional phases that can be represented as recurring patient-care activities. Among hospitals and other healthcare service providers, internal controls should be strengthened; performance, compliance, and consistency should be improved; and risk, workload, and overhead should be reduced.8 Governments and relevant corporate sectors are becoming more involved in digitizing healthcare systems, as seen in projects across various nations and economies. By integrating blockchain, artificial intelligence, and other readily available technologies, the road to success is to embed technology into each organization's DNA.9 The industry can employ technology to provide user- and customer-centric interfaces and data-driven decisions, enabling novel data processing methodologies and improved outcomes that enhance medical research and achieve patient-centricity. Artificial intelligence (AI), for instance, could assist in identifying and prioritizing patients for drug monitoring, which is vital for regulated drug production and quicker turnaround times.10 Clinical trial data have been tracked using numerical drug design techniques and AI for repurposing commercially available drugs, investigating the efficacy of therapeutic formulations, and measuring doses. Governments must choose the best strategies for leveraging resources and advancing reform in this quickly changing environment while guaranteeing the requisite consistency, compliance, and data protection. Blockchain- and AI-based security measures for e-health are displayed in Figure 3. AI can carry out complex computational tasks and quickly assess huge volumes of patient information. Despite the remarkable capabilities that AI can offer, some doctors remain wary of using it in healthcare, especially in roles that could affect a patient's wellbeing. Yet AI has proven that it can perform numerous dynamic and cognitive tasks faster than a person can. The auto industry has already demonstrated the ability to use AI to create driverless cars, and other businesses have used machine learning to detect fraud or identify financial concerns. These are only a few examples of how advanced AI has become.11
History of Artificial Intelligence (AI)
Artificial intelligence, now considered a scientific field, rose to prominence after the invention of the first computers.12 Early work focused on abilities attributed to intelligent beings: proving hypotheses, drawing conclusions, and playing games. Artificial intelligence studies the principles governing what is currently regarded as intelligent human behavior and develops formal models of these behaviors (Figure 4). It is currently considered a subfield of information technology. This research ultimately results in computer programs that mimic intelligent behaviors such as perception, learning, recognition, language use, symbol manipulation, originality, and problem-solving. Such programs are used for both experimental and practical purposes, including speech recognition, shape recognition (letters, drawings, and photographs), theorem proving, game playing (chess), translation between natural languages, music composition, medical diagnostics, and the development of expertise.13
John McCarthy of the Massachusetts Institute of Technology proposed "artificial intelligence" as the topic of a symposium at Dartmouth in 1956. The conference's goal was to compile and advance current research on "thinking machines." The conference was successful, and work on artificial intelligence projects intensified substantially as a result. Artificial intelligence is difficult to define exactly, and the literature contains numerous definitions.1 Even so, it is worth noting John McCarthy's original definition: "the design of machines that can be considered to be analogous to the human manifestations of intelligence." Taking two fundamental characteristics into account, later definitions can be grouped into four categories: some focus on the process of thinking and reasoning, while others consider behavior; the second distinction concerns how success is measured. Figure 5 shows examples of the definitions used to describe the various groups.
Two distinct approaches to artificial intelligence can be identified. The first, known as weak AI, assumes that a computer can be used to formulate and test hypotheses about the brain.1 The second, known as strong AI, makes far more extreme claims: its proponents contend that an appropriately programmed computer is a mind in its own right, not just a simulation of one.14 The development of artificial intelligence as a field of study has a fascinating history. Modern artificial intelligence can be traced to the neural network architecture that McCulloch and Pitts proposed in 1943 as a model of intelligence. A. Turing proposed his "intelligence test" seven years later, in 1950. Another significant year is 1955, when Allen Newell and Herbert Simon created "The Logic Theorist" to solve mathematical problems; many regard it as the first AI program. The year 1958 is also a milestone in the history of artificial intelligence research: John McCarthy created LISP (LISt Processing), a language tailored to artificial intelligence tasks that is still in use today. In 1966, J. Weizenbaum developed ELIZA, a program that could act as a psychotherapist and direct a dialogue with the "patient." Expert systems emerged from artificial intelligence research in the 1970s.
PROLOG, however, had already been developed by then. It had a built-in inference mechanism, which made the language well suited to developing expert systems. Colmerauer created the PROLOG programming language. Work on the DENDRAL project was completed in 1971, and all later expert systems were modeled after it. The next system, MYCIN, was developed by Edward Shortliffe; its goals included identifying bacterial blood infections and recommending a medication for treatment. Both systems were developed at Stanford. The 1980s saw significant development as corporations realized the potential of adopting artificial intelligence: sales of AI-related hardware and software reached $425 million in the USA in 1986 alone, and expert systems were developed on demand. Artificial intelligence research had already been put to use in real-world situations. Figure 6 illustrates the major areas of contemporary artificial intelligence.
Logic puzzles and games are two examples of the logical problems that artificial intelligence is asked to solve, and they make an excellent setting for experimenting with different approaches to reasoning and problem-solving. In clearly defined circumstances, such as those found in board games, planning and anticipation techniques are already very advanced, and computer programs produce excellent results in logical games: a computer program defeated the world chess champion in 1994. Theorem proving and logical reasoning, by contrast, are relatively constrained forms of reasoning of which the human mind is also capable.14 Building robots, meanwhile, involves robotics, computer vision, and the recognition of shapes, images, and individual subject characteristics. To control robotic limbs, computer programs must solve movement optimization problems and organize sequences of actions.
Automatic programming makes it possible to create computer programs automatically (for instance, from a description of the algorithm in natural language). Doing so, however, requires several components of intelligent behavior: understanding the description, selecting a method, and organizing the work.15 Computer games may be considered one of the earliest fields of AI study. They are an excellent place to test new heuristics and techniques, since results can be evaluated quickly. The successful chess programs discussed earlier provide the best evidence of the tremendous advances in artificial intelligence. Beyond logical games, artificial intelligence is also employed in video games, for example to govern characters' actions and behavior.
Fuzzy logic is the technology used to process linguistic terms. It extends the binary true/false, yes/no notation of logic to a partial, or even continuous, truth function. Expert systems that must operate on ambiguous knowledge employ fuzzy logic technologies. J.H. Holland invented genetic algorithms, whose name derives from the method used to find solutions: a process comparable to how genes are mixed and selected during evolution.16 Following the principles of natural selection and heredity, these algorithms iteratively search for better and better solutions. Research that relies on genetics and the principles of natural selection is thus known as genetic algorithms; a minimal sketch follows.
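The sketch below is a toy genetic algorithm in Python; the objective function, population size, and mutation rate are arbitrary choices made for illustration, not values from the text.

```python
import random

def fitness(x):
    # Toy objective with a single peak at x = 3 (arbitrary choice).
    return -(x - 3.0) ** 2

def evolve(pop_size=50, generations=100, mutation_rate=0.1):
    # Random initial population of candidate solutions.
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (natural selection).
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover: a child blends two parents' "genes" (heredity).
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0
            # Mutation: occasional random perturbation keeps diversity.
            if random.random() < mutation_rate:
                child += random.gauss(0, 1)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # converges near 3.0
```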
Intelligent software agents are small programs that carry out predefined duties automatically. They are distinguished by their ability to monitor their environment in the background and respond to specific trigger conditions. Antivirus software is among the best illustrations: such agents remain in memory while the computer scans incoming data, and when they discover a virus they either warn the user of the risk or immediately remove the risky files. Artificial Life, often known as ALife, is a relatively new subfield of artificial intelligence that investigates "natural" life by attempting to simulate biological processes with computer systems. Expert systems are one of the most visible outcomes of artificial intelligence research. They are equipped with knowledge bases and inference engines that enable questions and answers to be exchanged in natural language. Knowledge engineers, working with experts in particular domains, develop expert systems; verifying knowledge and accumulating it in the knowledge base are both responsibilities of knowledge engineers. Expert systems can solve difficult problems within a specific, well-defined field, as the sketch below illustrates.
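To make the knowledge-base-plus-inference-engine structure concrete, here is a minimal forward-chaining inference sketch in Python; the medical rules are invented placeholders for illustration, not clinical guidance.

```python
# Knowledge base: each rule maps a set of required facts to a conclusion.
# These rules are invented placeholders, not real diagnostic criteria.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "chest pain"}, "refer for chest X-ray"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the inference engine derives a new fact
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest pain"}))
```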
E-Health and AI
Artificial intelligence (AI) is the science and engineering of creating intelligent computer systems that can take on human work without constant supervision.17,10 These computer systems use a range of algorithms, decision-making abilities, and vast amounts of data to respond to queries or deliver answers. Technologies that can help us in our daily lives are all around us.18 Through driverless vehicles, advances in health research, and personal wearable technology, the field has demonstrated its incredible potential and how far it has come in recent decades. Given the advances made in many other industries, healthcare is likely to be among the first fields where artificial intelligence directly and positively affects individual lives.19 According to a recent study published in Neurobiology of Aging, AI can identify signs of Alzheimer's disease in a patient's brain scans before physicians can, and it is being used in comparison studies of scans of healthy brains and those affected by Alzheimer's disease.1 A recent study by the University of Tokyo reports that Watson, IBM's cognitive supercomputer, successfully identified a rare form of leukemia in a 60-year-old woman and provided her with a detailed recuperation plan. For oncologists, Watson offers a dedicated tool that assists clinical analysts in selecting treatment options based on clinical clues. IBM also created Medical Sieve, a tool for clinical decision-making in radiology and cardiology.1
E-Health Features of AI
As shown in Figure 8, there are numerous other healthcare technologies where AI can be quite helpful. However, to continue their journey in the AI space, health organizations need to assess which of them can be put into practice.20 Artificial intelligence in healthcare is gaining traction in the following areas.
Health-related virtual counsel
Given that the majority of teenagers, adults, and seniors in the US now own smartphones, most people can access an informative personal virtual assistant through such devices.21 Robust platforms make powerful AI systems such as Siri and Cortana possible, and similar structures can be quite valuable when applied to healthcare. Drug warnings, patient education resources, and human-like interactions that gauge a patient's current emotional state are a few examples of such healthcare applications.22 AI acting as a kind of executive secretary could have a significant impact on tracking and assisting patients with their needs while medical personnel are not present.
Sophisticated data screening and analysis
AI is capable of more than just comprehending human commands and determining the appropriate response. For example, AI has been applied in several specialized cases: in oncology to assist in identifying anomalies in X-ray and magnetic resonance imaging (MRI), in genomics to carry out sophisticated sequencing, and in precision medicine to help develop highly customized treatments for specific patients.1 In the IBM Watson example, the AI successfully increased its capacity to process both structured and unstructured patient data, allowing it to offer evidence-based drug recommendations to cancer patients.23
Individual life coach
Medical practitioners who treat patients with chronic conditions know the value of maintaining contact with their patients outside of the examination room, and many clinics have introduced life-coaching programs as part of their comprehensive offering. Given current diminishing reimbursements, however, such amenities are difficult to sustain because of their cost. Thanks to today's sophisticated AI capabilities and smartphone applications, patients can instead provide feedback through the numerous data elements acquired on their phones and other connected electronics.24 Whether it supports medication adherence or is simply a pleasant voice that promotes wellness routines and better behaviors and gives helpful reminders that can be relayed back to physicians, AI creates a customized environment for each individual consumer.
Healthcare bots
Healthcare bots, AI programs that patients can interact with through a chat window on a website or on a mobile device to get assistance with their needs, are anticipated to become part of what healthcare organizations offer soon. Patient support is one of the most recent AI domains to gain recognition.25 Critics caution, however, that chatbot counseling lacks the reciprocity and transparency of treatment that should exist in the relationship between mental healthcare patients and the healthcare provider (whether a counselor or a psychotherapist). Bots may be used to schedule follow-up appointments with a patient's doctor online, for example, or to help a customer with billing or treatment questions.26 These use cases improve patient care, offer round-the-clock support for fundamental functions like scheduling, billing, and other therapeutic programs, and cut overall hospital operating costs. The current adoption of AI throughout healthcare is likely to proceed within a cautious and generally healthy framework, and it shows that AI has what it takes to diagnose patients effectively.27 For many healthcare providers, AI working collaboratively alongside practitioners is likely to remain the course for some time.28
Privacy and security
Federal statute protects patient health information. As AI in medical care will need access to many sets of health records, it must adhere to the same norms that govern present applications and infrastructure, and violations of these rules or failures to maintain data integrity may result in civil and financial penalties. Because most AI platforms are centralized and require large amounts of processing power, patient information will be stored in the provider's data centers; if such a system were compromised, the result would be data protection problems and significant damage.29 Detractors of healthcare AI have argued that machines are not always accurate and will occasionally let us down, and that when they recommend the wrong medication or give the wrong diagnosis, these errors can have catastrophic results. Nevertheless, progress will eventually reach the point where AI can be trusted, once it has demonstrated its ability to protect patients and support medical treatment. If a platform's error margins are lower than or comparable to those of its human contemporaries, it will be ready to take an active role in healthcare.29 AI technology is developing quickly. Elon Musk, Bill Gates, and Stephen Hawking are just a few of the well-known scientists and public figures who have warned that AI could become so powerful and self-aware that it puts human interests second to its own. Long before robotics turns into an enemy, however, there are enormous benefits of artificial intelligence in healthcare, and many doctors are embracing the technology: AI can assist medical professionals with identifying novel therapeutic options, detecting cancer early, and engaging patients.1
AI-Based Different Healthcare Tools
The field of study known as artificial intelligence (AI), which includes machine learning, uses computer algorithms to learn from data, find patterns, and predict events.30 One of the most fascinating features of AI and machine learning is its capacity to evaluate huge and complicated data structures and build predictive models that personalize and enhance diagnosis, prognosis, monitoring, and treatment administration to improve individual health outcomes. Prediction models to support clinical decision-making have been around for a while.
Framingham risk score
The Framingham risk score is one of several scoring systems used to determine an individual's risk of developing cardiovascular disease.1 A number of these scoring systems are available on the Internet. Cardiovascular risk scores estimate how likely a person is to develop cardiovascular disease over a specific time period, often 10 to 30 years. Because they quantify the risk of developing cardiovascular disease, they suggest who will benefit most from prevention, and thus determine who should receive preventive drugs such as blood pressure and cholesterol-lowering treatments. Illustrating the significance of blood pressure control and monitoring for cardiovascular health and outcome prediction, almost 30% of coronary heart disease (CHD) cases in both men and women were attributable to blood pressure readings above the high-normal range (130/85 mmHg).1
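Framingham-type scores are derived from Cox proportional-hazards models. As a sketch of the general published form (the coefficients and baseline survival are estimated per cohort and sex and are not reproduced here), the predicted risk over a horizon t is:

```latex
\hat{p}(t) \;=\; 1 - S_0(t)^{\exp\left(\sum_i \beta_i x_i \;-\; \sum_i \beta_i \bar{x}_i\right)}
```

where the x_i are risk factors such as age, cholesterol, blood pressure, smoking, and diabetes status, the beta_i are the fitted regression coefficients, the x-bar_i are the cohort means, and S_0(t) is the baseline survival at time t.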
QRISK®3
The QRISK®3 algorithm is a validated risk-stratification tool that estimates a person's likelihood of suffering a heart attack or stroke within the next 10 years. It reports the average risk of individuals who share the same risk factors as the person being assessed.31 The QRISK®3 algorithm was developed for the UK National Health Service and is based on data routinely gathered from hundreds of general practices (GPs) across the nation that have voluntarily contributed data to the QResearch database for medical study. Before making any decisions about their health, patients must consult with their doctors; the score's therapeutic application or misuse is not the authors' or sponsors' responsibility. QRISK®2 is sufficient for a transition period, but QRISK®3 is superior. Since individual values were not available, the Townsend deprivation score was calculated as the median of the vigintile (twentieth part) of the score for the area in which a person lived. The QRISK®3 derivation allowed cholesterol levels measured after the index date to be used if they preceded any event; however, only values recorded before the index date were used, to avoid using future knowledge in prediction.1
MELD score
The model for end-stage liver disease (MELD) was created to forecast survival in patients with portal hypertension undergoing elective transjugular intrahepatic portosystemic shunt placement.1 Because of its accuracy in predicting short-term survival among patients with cirrhosis, MELD was adopted by the United Network for Organ Sharing (UNOS) in 2002 for prioritizing patients awaiting liver transplants in the USA. It was later demonstrated that the MELD, which considers only objective criteria, is an accurate predictor of survival in a variety of patient populations with advanced liver disease, and the allocation of organs for liver transplantation has come to rely primarily on the MELD score.32 The MELD score has also been shown to predict survival in cirrhotic patients with infections, bleeding, fulminant hepatic failure, and alcoholic hepatitis. Additionally, the MELD score is employed to choose the most appropriate course of action for hepatocellular carcinoma patients who are not liver transplant candidates and to identify patients who need surgery other than liver transplantation. Despite its many advantages, the MELD score fails to accurately predict survival in roughly 15 to 20% of patients. The model's predictive accuracy will probably rise with the addition of variables that better reflect liver and renal function, and the MELD score will continue to be refined and validated in the future.33
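For reference, here is a sketch in Python of the original MELD computation as commonly published (with the standard clamping conventions; later variants such as MELD-Na differ and are not shown):

```python
import math

def meld_score(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Original MELD sketch: values below 1.0 are floored at 1.0, and
    creatinine is capped at 4.0 mg/dL (or set to 4.0 if on dialysis)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(crea)
             + 6.43)
    return round(score)

print(meld_score(bilirubin_mg_dl=2.5, inr=1.8, creatinine_mg_dl=1.2))
```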
ABCD2 score
Following a transient ischemic attack, this clinical decision tool, which acknowledges the significance of diabetes in assessing stroke risk, is employed in and outside of the hospital for patients at high risk of stroke.1 The ABCD2 score is computed as follows (see the calculator sketch after this list):
A: Age (60 years or older, 1 point);
B: Blood pressure at presentation (140/90 mmHg or higher, 1 point);
C: Clinical features (unilateral weakness, 2 points; speech disturbance without unilateral weakness, 1 point);
D: Duration of symptoms (60 min or longer, 2 points; 10-59 min, 1 point);
D: Diabetes (1 point).
Total scores range from 0 (lowest risk) to 7 (highest risk).
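A minimal calculator sketch in Python, directly transcribing the point rules above (illustrative only, not clinical software):

```python
def abcd2(age, systolic, diastolic, unilateral_weakness,
          speech_disturbance, duration_min, diabetes):
    """Return the ABCD2 score (0-7) from the point rules listed above."""
    score = 0
    if age >= 60:
        score += 1                          # A: age
    if systolic >= 140 or diastolic >= 90:
        score += 1                          # B: blood pressure at presentation
    if unilateral_weakness:
        score += 2                          # C: clinical features
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:
        score += 2                          # D: symptom duration
    elif duration_min >= 10:
        score += 1
    if diabetes:
        score += 1                          # D: diabetes
    return score

print(abcd2(72, 150, 85, True, False, 45, True))  # -> 6
```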
Nottingham prognostic index
The Nottingham Prognostic Index (NPI) is a simple clinicopathological prognostic technique for determining breast cancer risk.34 The NPI method divides early-stage and locally advanced breast cancer cases (TNM stages I, II, and III) into three or more prognostic groups.35 Even though more sophisticated models have surpassed it in some applications, NPI continues to play a significant role in clinical practice and research. NPI and related prognostic models assist patients and clinicians in deciding whether to undergo adjuvant chemotherapy following surgery. The NPI can be used to provide prognostic information by classifying individuals into prognostic groups and then applying survival estimates from earlier cohort studies. In addition to offering forecasts for particular patients, the NPI is useful in economic evaluations: in the economic assessment of innovative medicines, overall survival is a key input for the decision-analytic models used to evaluate additional costs and benefits and to decide whether and how to allocate limited healthcare resources to new screening and management options. Decision-analytic models are crucial for assessing novel technologies when clinical trials are unlikely to be feasible or timely enough to provide convincing evidence of the impact of introducing new programs or modifying existing ones.1
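For reference, the commonly cited form of the index (a sketch from the standard literature, not taken from the text above) combines tumor size S in centimetres, lymph-node stage N (1 to 3), and histological grade G (1 to 3):

```latex
\mathrm{NPI} = 0.2 \times S + N + G
```

Lower values indicate a better prognosis; published cut-offs of roughly 3.4 and 5.4 are typically used to separate good, moderate, and poor prognostic groups.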
XNAT
The extensible neuroimaging archive toolkit (XNAT) is one of the most popular open-source informatics platforms for imaging research; it can speed up the building of AI models by offering an end-to-end development platform and facilitating quicker collaborations.1 Similar reasoning applies to clinical research inquiries. The most prevalent data types in XNAT are projects, subjects, and experiments, which are typically arranged hierarchically. A wide range of built-in data types is available, including support for provenance and quality assurance as well as different imaging modalities. XNAT is developed in Java, and its plugins are created with the Gradle build system. The build.gradle file describes the basic configuration, including the plugin version, library dependencies, and the other build steps needed to create a plugin from Java source code. Before compilation and generation of the plugin jar file, the build system confirms that all dependencies have been collected. If the plugin file is placed in the right location inside the XNAT directory structure, the instance will automatically include it at startup.36
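As an illustration of the project/subject/experiment hierarchy, here is a minimal sketch that queries XNAT's REST API with Python's requests library. The server URL and credentials are placeholders, and endpoint details should be checked against the XNAT documentation for the deployed version:

```python
import requests

BASE = "https://xnat.example.org"   # placeholder server
AUTH = ("username", "password")     # placeholder credentials

# List projects, then walk down the project -> subject -> experiment hierarchy.
projects = requests.get(f"{BASE}/data/projects", params={"format": "json"},
                        auth=AUTH).json()["ResultSet"]["Result"]
for proj in projects[:1]:
    pid = proj["ID"]
    subjects = requests.get(f"{BASE}/data/projects/{pid}/subjects",
                            params={"format": "json"},
                            auth=AUTH).json()["ResultSet"]["Result"]
    for subj in subjects[:1]:
        exps = requests.get(
            f"{BASE}/data/projects/{pid}/subjects/{subj['ID']}/experiments",
            params={"format": "json"}, auth=AUTH).json()["ResultSet"]["Result"]
        print(pid, subj["label"], [e["label"] for e in exps])
```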
SwissADME
SwissADME is part of the Swiss Drug Design initiative of the Swiss Institute of Bioinformatics.37,1 It estimates the ADME profile, pharmacokinetics, drug-likeness, and medicinal chemistry friendliness of small molecules produced by any computer-aided drug design (CADD) procedure. SwissADME's advantages include, but are not limited to, multiple input methods, computation for numerous molecules at once, and the ability to display, save, and share results per molecule. These advantages hold in comparison with state-of-the-art web-based tools for ADME and pharmacokinetics (e.g., pkCSM and admetSAR), alongside its attention to proficient methods (e.g., iLOGP). Blood-brain barrier (BBB) permeation and passive gastrointestinal absorption are two crucial ADME features that can be predicted using the straightforward BOILED-Egg approach. Despite the conceptual simplicity of this classification model, which uses just two physicochemical descriptors, WLOGP and TPSA, to describe lipophilicity and apparent polarity, respectively, it was created with great care to ensure statistical significance and robustness. The egg-shaped classification plot is made up of the white (the physicochemical region of highly probable passive gastrointestinal absorption) and the yolk (the physicochemical space of highly probable BBB permeation). Compounds in the outer grey zone have properties that suggest limited absorption and brain penetration, and the two compartments are not mutually exclusive.1
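The descriptor side of this idea can be sketched with RDKit, which computes TPSA and the Wildman-Crippen logP (the basis of WLOGP). The decision thresholds below are illustrative placeholders, since the actual BOILED-Egg regions are elliptical boundaries fitted in the original publication:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

def boiled_egg_sketch(smiles):
    """Compute the two BOILED-Egg descriptors and apply toy cutoffs.
    The real model uses fitted elliptical regions, not box thresholds."""
    mol = Chem.MolFromSmiles(smiles)
    tpsa = Descriptors.TPSA(mol)     # apparent polarity
    wlogp = Crippen.MolLogP(mol)     # Wildman-Crippen lipophilicity
    gi_absorbed = tpsa < 140 and -1 < wlogp < 6    # illustrative only
    bbb_permeant = tpsa < 80 and 0 < wlogp < 6     # illustrative only
    return {"TPSA": round(tpsa, 1), "WLOGP": round(wlogp, 2),
            "GI absorption (toy)": gi_absorbed, "BBB (toy)": bbb_permeant}

print(boiled_egg_sketch("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```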
AI's Applications in Healthcare
AI can be more accurate at recognizing an issue in its infancy, and it has made it possible to automate the selection of chemicals and drug formulations. Peptone uses artificial intelligence (AI) in conjunction with Keras and TensorFlow to forecast the properties and features of proteins, making it easier for specialists to plan proteins, identify issues in manufacture and characterization, and find novel protein features. Clinical trials frequently use AI to transform various biological and healthcare data into computer models. By allowing doctors to discern between individuals' responses to medications based on their individual features, these models facilitate the widespread administration of individualized medication and treatment. Drug research is another area where AI is gaining traction: with the help of Helix, an AI engine that responds to vocal questions and requests, scientists can raise their productivity, enhance lab safety, keep up with crucial research topics, and keep track of inventory.
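As a generic illustration of this kind of property-prediction model (a sketch, not Peptone's actual system), here is a minimal Keras/TensorFlow regression that maps numeric sequence-derived features to a single protein property; the data are random placeholders:

```python
import numpy as np
from tensorflow import keras

# Placeholder data: 1000 "proteins", 64 numeric features each (for example,
# composition descriptors), and one continuous property to predict.
X = np.random.rand(1000, 64).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),               # predicted property value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3]))              # predictions for three examples
```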
Radiology
AI in radiology is being studied using computerized tomography (CT) and MRI scans to detect and characterize patients' illnesses.1 According to the Radiological Society of North America, the importance of artificial intelligence in radiology has grown steadily over the years, to the point where it accounts for roughly 10% of all publications in the field. AI has proven effective for two critical aspects of oncological care: detecting anomalies and tracking progression over time using imaging. Radiology staff must be extremely meticulous about keeping scanners clean between patients; because the equipment is simpler to maintain, chest X-rays can be a superior choice in this situation. The use of X-rays in the diagnosis of COVID-19 has, however, generated conflicting reports: hospitals in Spain use X-rays as a standard part of their diagnostic procedure, despite the fact that other sources describe the test as insensitive.38
Screening
Recent research has suggested using artificial intelligence (AI) to characterize and evaluate the outcome of maxillofacial surgery or cleft palate repair in terms of facial attractiveness or age presentation.1 A 2018 report in the journal Annals of Oncology described an artificial intelligence system (a deep learning convolutional neural network) able to identify skin cancer more accurately than doctors: human dermatologists accurately identified 86.6% of skin malignancies on average from the photographs, compared with 95% for the CNN. Researchers have also presented an AI tool based on a Google DeepMind algorithm capable of outperforming medical experts in the detection of breast cancer.1
Psychology
The use of AI in psychology is still at the proof-of-concept stage, but there is growing evidence supporting the use of chatbots that mimic human behavior in the evaluation of depression and anxiety. Challenges include the possibility that for-profit businesses will develop and deploy applications for detecting suicidal behavior.39
Primary care
Primary care has been a major sector for the growth of AI technologies. Decision-making, statistical modeling, and market analytics have all benefited from the application of AI in primary care. Despite the tremendous advances in AI technologies, healthcare professionals' views on its use in primary care remain fairly limited, focusing mostly on organizational and repetitive reporting duties.1
Illness diagnosis
According to one study, numerous AI techniques, including logistic regression, machine learning, and support vector machines, have been applied to diagnose and treat various illnesses. Each of these approaches is "trained" so that its classifications are as consistent as possible with confirmed findings.1 Artificial neural networks (ANN) and Bayesian networks (BN) are two further ways of representing such detail for disease diagnosis and classification. ANN was found to be the more accurate of the two at detecting diabetes and cardiovascular disease (CVD), as the comparison sketch below illustrates.1
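Here is a minimal sketch of this kind of comparison using scikit-learn's built-in breast cancer dataset as a stand-in (the studies cited above used diabetes and CVD cohorts, which are not reproduced here): several classifiers are trained on the same split and compared by accuracy.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Stand-in clinical dataset: 30 numeric features, binary diagnosis label.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
    "neural network (ANN)": MLPClassifier(hidden_layer_sizes=(32,),
                                          max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")
```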
Telemedicine
Potential AI applications have increased along with the development of telemedicine and clinicians' ability to diagnose patients virtually. AI could make it possible to provide patients with remote care by tracking their information with sensors. A person wearing a wearable device can be continuously monitored, with alerts raised for changes that may be harder for a human observer to distinguish, and AI systems can relate this data to other data previously collected before alerting clinicians to potential concerns.40 Chatbot counseling is another application of artificial intelligence, although some researchers argue that dependence on chatbots for mental healthcare lacks the reciprocity and transparency of treatment that should exist in the partnership between mental healthcare patients and the healthcare professional (be it a counselor or a psychotherapist). Given that longer life expectancies have raised the average age of the population, artificial intelligence may also aid in caring for the elderly. While the technology is helpful, there are still debates about the limits of monitoring: ambient and wearable sensors can learn a person's daily habits and alert a caregiver if an activity is unusual or a measured vital sign is abnormal, as in the minimal alerting sketch below. Research and development trends in artificial intelligence in medicine, and their implications for the future of healthcare, were discussed in a report by Xing et al. (2021); it is abundantly evident that medicine is about to undergo a revolution, given the wide range and extraordinary depth of potential impacts that AI promises. According to Adir et al. (2020), applying artificial intelligence to the design of nanomedicines improves therapeutic effectiveness by optimizing material properties according to the predicted interactions of the target drug with biological fluids, the immune system, the vascular system, and cell membranes. Numerous scientific studies note that high intratumor and interpatient heterogeneity makes it extremely difficult to construct rational diagnostic and treatment platforms and to analyze their results.
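A minimal sketch of threshold-style vital-sign alerting in Python, assuming a stream of heart-rate readings from a wearable; the baseline window and z-score cutoff are arbitrary illustrative choices:

```python
import statistics

def monitor(readings, window=20, z_cutoff=3.0):
    """Flag readings that deviate sharply from the recent personal baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1.0
        z = (readings[i] - mean) / sd
        if abs(z) > z_cutoff:
            alerts.append((i, readings[i], round(z, 1)))  # notify caregiver
    return alerts

# Simulated heart-rate stream with one abnormal spike at the end.
hr = [72, 74, 71, 73, 75, 72, 70, 74, 73, 72,
      71, 73, 74, 72, 75, 73, 72, 74, 71, 73, 128]
print(monitor(hr))
```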
AI in drug discovery
AI is a useful tool for a variety of drug discovery processes, including drug design, chemical synthesis, drug screening, polypharmacology, and drug repurposing. AI can distinguish hit and lead compounds, enabling quicker validation of therapeutic targets and optimization of structural designs. Alongside these benefits, the quantity, growth, diversity, and ambiguity of data pose significant challenges for AI.41 The drug research data sets of pharmaceutical corporations can contain millions of molecules, which may be too many for standard machine learning algorithms to handle. Quantitative structure-activity relationship (QSAR)-based computational models can quickly predict large numbers of chemicals or surface-level physicochemical properties such as log P or log D, but they are far from able to predict complex biological characteristics such as the efficacy and side effects of compounds. QSAR-based models also struggle with challenges such as small training sets, experimental error in training data, and a lack of experimental validation. To address these concerns, deep learning (DL), a more recently developed AI technology, and the associated modeling studies can be utilized to evaluate the safety and efficacy of therapeutic drugs through big data modeling and analysis. To examine the advantages of DL for the pharmaceutical industry's drug discovery process, Merck sponsored a QSAR machine learning competition in 2012, in which DL models beat traditional ML methods in predictive performance on 15 absorption, distribution, metabolism, excretion, and toxicity (ADMET) data sets for drug candidates.42 As AI-based QSAR methodologies have matured, they can be used to accelerate QSAR research; QSAR modeling techniques, including linear discriminant analysis (LDA), support vector machines (SVMs), random forests (RF), and decision trees, have been used to find potential drug candidates, as in the minimal sketch below.
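A minimal QSAR-style sketch: RDKit descriptors are computed for a few molecules and a random forest is fit to toy activity values. Real QSAR work uses thousands of curated compounds and validated descriptors, so every numeric value here is a placeholder:

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles):
    """Tiny descriptor vector: molecular weight, logP, TPSA, H-bond donors."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

# Toy training set: SMILES strings with invented "activity" labels.
train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC",
                "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "O=C(O)c1ccccc1O"]
train_activity = [0.2, 0.7, 0.1, 0.3, 0.9, 0.6]   # placeholder values

X = np.array([featurize(s) for s in train_smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, train_activity)

# Predict the toy activity of a new compound (caffeine).
print(model.predict([featurize("Cn1cnc2c1c(=O)n(C)c(=O)n2C")]))
```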
Integration of Block Chain and AI in Healthcare
By combining the two technology ecosystems, problems in AI and blockchain can be effectively addressed. AI algorithms rely on data or information to analyze, learn from, and draw conclusions, and machine learning algorithms operate more effectively when data comes from a reliable, secure, trusted, and credible data repository or platform.43 Blockchain serves as a distributed ledger in which information can be transmitted and stored securely using cryptography, in a way that is accepted by all mining nodes. The integrity and robustness of data storage on the blockchain ensure that it cannot be altered, and when smart contracts are used to perform analytics with machine-learning algorithms, the outcomes can be trusted and left untampered.21 Combining blockchain and AI could create a secure, immutable, decentralized system for the sensitive data that AI-driven approaches must gather, store, and use.44 Using this strategy, industries handling medical, personal, banking and financial, trade, and legal data have advanced in terms of fundamental data and information security.45 Table 1 presents some of the main advantages of adopting blockchain for AI. Through such integration, intelligent decentralized autonomous organizations (DAOs) can automatically and swiftly verify data, value, and asset transfers among several authorities.
Table 1
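To make the "chained, tamper-evident ledger" idea from above concrete, here is a minimal hash-chain sketch in Python; it illustrates only the integrity property, while real blockchains add consensus, networking, and smart contracts:

```python
import hashlib, json, time

def make_block(records, prev_hash):
    """Each block commits to its records and to the previous block's hash."""
    block = {"time": time.time(), "records": records, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected or (i > 0 and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = [make_block(["patient A: consent granted"], "0" * 64)]
chain.append(make_block(["patient B: record shared"], chain[-1]["hash"]))
print(verify(chain))          # True
chain[0]["records"][0] = "patient A: consent revoked"  # tamper attempt
print(verify(chain))          # False
```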
Integration of Block Chain and AI in Biomedical Research
Work on decentralizing and accelerating biomedical research and healthcare with blockchain and next-generation artificial intelligence technologies has demonstrated that deep learning is a potent AI technique, successfully applied in medical diagnostics, cellular image analysis, chemical synthesis, drug classification, and other areas.46 Bai et al. (2021) demonstrated that deep learning is a promising method for drug discovery and development. In their study, a deep learning model and a conventional technique are combined in the MolAICal program to build 3D structural ligands in the 3D pocket of protein targets. MolAICal consists of two main modules. The first module builds a deep learning model based on Wasserstein generative adversarial networks (WGANs), trained on fragments of FDA-approved drugs; 3D ligands are then grown in the protein pocket from the fragments the deep learning model generates. In the second module, a WGAN-based deep learning model is trained on drug-like compounds from the ZINC database, and molecular docking is used to assess the affinities between the generated compounds and proteins. The non-membrane target SARS-CoV-2 Mpro and the membrane target GCGR were used to demonstrate MolAICal's drug design capabilities, highlighting its ability to produce ligands with varying degrees of 3D structural similarity to the crystal ligands of GCGR or SARS-CoV-2 Mpro. MolAICal thus offers useful drug design tools to help researchers find and create new potential treatments. In another study, Krittanawong et al. (2020) argued that AI development can be accelerated by blockchain integration: considerably expanding the availability of data for AI training, enabling the exchange of proprietary AI algorithms for better generalization, decentralizing the databases of various suppliers or health systems, and rewarding solutions that produce better results than those that do not. Personalized cardiovascular therapy may advance as a result of blockchain and AI integration. These applications are still in their infancy, however, and there are concerns about their implementation; additional study is required in areas including technological trust, whether competing businesses would actually want this data flow in practice, compensation, and ethical considerations. Cardiovascular illness can be predicted via a network of sensors connected through blockchain and AI. Blockchain and artificial intelligence are also being utilized to distribute data via a decentralized network, enabling organizations to learn from new lung cancer patients' symptoms and cases without sacrificing privacy.47 With this technique, a worldwide model can be developed by cooperatively sharing locally trained big data analytics models, such as those based on CT scans for lung cancer; the same decentralized network can also be used to extract fresh patterns for new lung cancer patients.48
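The "share locally trained models, not raw data" pattern described above is essentially federated averaging; here is a minimal sketch with NumPy, in which toy linear models and random placeholder data stand in for each site's locally learned analytics:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, epochs=100, lr=0.1):
    """Each hospital fits a tiny linear model on its own private data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three "hospitals" with private datasets drawn from the same true model.
true_w = np.array([2.0, -1.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

# Only model weights leave each site; a server averages them (FedAvg).
local_weights = [local_train(X, y) for X, y in sites]
global_w = np.mean(local_weights, axis=0)
print(np.round(global_w, 2))   # close to [2.0, -1.0, 0.5]
```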
AI and Block Chain for Combating COVID-19
An innovative open-access dataset on COVID-19 provides regular estimates of new infections by nation and, under certain conditions, by region; combined with information on people's movements, it is an appropriate dataset for merging blockchain and AI.49 From several angles, blockchain and artificial intelligence can offer workable methods of dealing with the coronavirus epidemic.50 Indeed, AI and blockchain have been used to combat other contagious epidemics, such as Ebola, where blockchain was used to deliver vaccines and conduct real-time contact tracing, transmission pattern surveillance, and monitoring of contacts with the disease. A cryptographic component of blockchain was shown to help prevent patient data from being shared insecurely with other health entities, such as healthcare providers.51
Blockchain can actively simplify the process of accelerating medication trials, record and track all fundraising actions and donations immutably, and support the management of outbreaks and treatments in the fight against the coronavirus. COVID-19-like epidemics have also been addressed with AI. For instance, using social media and neural network architectures based on Long Short-Term Memory (LSTM), AI techniques were used to anticipate the dynamics of influenza-like illness in military populations; a minimal forecasting sketch follows.52 BlueDot, a Toronto-based startup that employs an AI-enhanced monitoring system, appears to have detected the Wuhan outbreak several hours before Chinese authorities and other foreign organizations and agencies.53 Artificial intelligence can also assist in the evaluation of COVID-19 cases: the start-up Infervision, for instance, uses deep learning systems for medical imaging to detect COVID-19 cases more quickly by identifying intricate lung features.54 In addition, blockchain technology is a unique shared method for storing, verifying, and accepting data as well as carrying out series of transactions.55 A low-cost blockchain- and AI-coupled self-testing and monitoring system has been suggested for use both in developed settings (to avoid crippling and straining public health capacity or universal healthcare infrastructure) and in emerging, resource-limited situations. It offers a high degree of protection and allows for patient-centered healthcare facilities, improved public health monitoring, outbreak control, and quick, efficient decision-making.56,57 Rao and Vazquez (2020) propose a method based on an AI algorithm that enables the quick identification of infected cases as well as risk assessment and evaluation based on symptoms and signs connected to the novel coronavirus, using a web-based or mobile survey. Based on the responses, the algorithm can also send alerts to clinics or telemedicine departments requesting health checks and case confirmation.
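Here is a minimal Keras sketch of the LSTM-forecasting idea: a toy univariate series of weekly case counts (synthetic placeholder data) is windowed, and a small LSTM is trained to predict the next value.

```python
import numpy as np
from tensorflow import keras

# Synthetic weekly "influenza-like illness" counts (placeholder data).
t = np.arange(300, dtype="float32")
series = 50 + 30 * np.sin(2 * np.pi * t / 52) \
         + np.random.rand(300).astype("float32") * 5

# Window the series: 8 past weeks -> next week's count.
window = 8
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                      # shape (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

# Forecast the week after the last observed window.
print(model.predict(series[-window:][None, :, None]))
```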
AI Facilitates Pattern Recognition in Incoming Data
In addition to enabling more efficient operations and offering insights into patients' lifestyles and health, AI may help in the strategic application of analytics.58 AI is developed by studying algorithms and making difficult choices. When a patient enters a hospital with a life-threatening illness, the computer makes the diagnosis and suggests treatment alternatives as well as the length of stay.59 The computer can be accurate because it examines large amounts of data; in particular, it can make use of Google's powerful computing capabilities. Hanover, a Microsoft AI product, harvests healthcare data using artificial intelligence. With this technology, machines can memorize medical research papers and find viable treatments for a range of illnesses; the system assesses a patient's health history and personal information, correlates it with data from medical research journals, and suggests treatments.60 Blockchain technology is used by pharmaceutical businesses to assess and guarantee the safety and accountability of the drugs they sell: it guarantees that information is captured as it occurs and then disseminated accordingly to produce better patient outcomes. The WHO estimates that approximately one million individuals die each year as a result of using fake medications. By giving each drug a distinct serial number, pharmaceutical companies can use blockchain technology to confirm that there are no fake medications on the market: as soon as a medicine entered the system, an erroneous number would be flagged. This can also prevent, among other things, the substitution of medications or their improper production. Tracking drugs through unique serialization IDs is based on the National Drug Code with unique serial numbers: the DSCSA law requires each package of tablets, vials, syringes, and so on to carry a unique serial number identifying the drug, the location where it was made, and other information. Full drug tracking enabled by blockchain has been reported to benefit large pharmaceutical companies.1
Integration of Block Chain and AI in the Storage of Large Healthcare Data
EHR systems may upload data, including medical records, to the blockchain. If this data is processed directly within the blockchain network, the predetermined and constrained block size will lead to increased computing costs and storage demands, and such data frequently raises privacy violations.61,1 To get around these problems, most significant studies and implementations use a framework that manages the large volumes of sensitive original information with defensible third parties (such as cloud services) off-chain, while the blockchain is used for on-chain storage of compact references.1
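A minimal sketch of that hybrid pattern in Python: the record body stays in off-chain storage (a dict standing in for a cloud store), while only its SHA-256 digest goes "on-chain", so integrity can be checked without bloating the ledger or exposing the record:

```python
import hashlib, json

off_chain_store = {}     # stands in for a cloud/EHR document store
on_chain_ledger = []     # stands in for blockchain transactions

def publish(record_id, record):
    """Keep the full record off-chain; anchor only its hash on-chain."""
    blob = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    off_chain_store[record_id] = blob
    on_chain_ledger.append({"id": record_id, "sha256": digest})

def verify(record_id):
    """Integrity check: does the stored record still match its anchor?"""
    anchor = next(tx for tx in on_chain_ledger if tx["id"] == record_id)
    digest = hashlib.sha256(off_chain_store[record_id]).hexdigest()
    return digest == anchor["sha256"]

publish("rec-001", {"patient": "A", "note": "annual checkup"})
print(verify("rec-001"))                       # True
off_chain_store["rec-001"] = b'{"tampered": true}'
print(verify("rec-001"))                       # False
```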
Block Chain and AI Integration in Healthcare: Resolving Security and Interoperability Issues
Even though blockchain can guarantee security and integrity, adding AI capabilities to the health sector brings further advantages. Currently, AI is mostly used to identify anomalies in X-rays and CT scans, a duty it can perform at least as well as humans, enabling a greater level of individualized therapy. Because of the significant advantages that may be realized by reducing medical expenses and improving the quality of medical care, experts predict that significant inventions will emerge in the future. Google, Microsoft, Apple, and Amazon, as well as a plethora of startups, are aggressively researching AI for medical applications in order to make more efficient use of patient data, improve diagnostic accuracy, and deliver better suggestions based on evidence-based research findings, among other goals. These applications complement the development of robotic surgery and the availability of digital counseling through smartphone applications.1 According to Accenture, critical clinical health AI solutions could save the US healthcare industry $150 billion annually by 2026. As artificial intelligence develops, it must address healthcare interoperability and protection issues. The fusion of complex algorithms, enormous data sets, and powerful machines has produced a contemporary method for integrating technologies in healthcare.62
Conclusion
Modern technology and the Internet of Things have considerably accelerated the growth of e-health solutions, opened up access to better medical services, and made remote patient monitoring possible. Because of improvements in health monitoring hardware and the availability of remote diagnostic services, e-health systems have become widely used in recent years. People require a high standard of healthcare, and such facilities are seen as among the most important indicators of a nation's overall standard of living. Vital signs are the most important indicators of human health, and physiological parameters for many diseases can therefore be predicted from them. Significant groundbreaking initiatives have been launched to promote the use of blockchain technology across billions of IoT devices, aimed at providing secure, open, and useful medical aid to patients and healthcare professionals.