Universalising Leading Healthcare Budgeting Systems by Applying Supervised Machine Learning Through Integrating Recurrent Neural Networks

 

Abstract:

Globally, inadequate healthcare funding has had severe effects, demonstrating the extensive issues raised by inefficient spending. Consequently, this study's principal objective is to universalise leading healthcare budgeting systems by examining different countries' data, such as population and number of hospital beds, with linear regression analysis. This research focuses on extrapolating the funding principles of these systems to less advantaged countries, emulating the funding distribution of more successful systems, and determining the optimal computational architecture to carry out such analysis. We used recurrent neural networks (RNNs), which hold an implicit internal memory, to estimate the amount spent on particular areas as a percentage of the government's overall budget. The RNN predicted healthcare funding with a mean squared error loss of 0.113 (to 3 significant figures).

Keywords: Machine learning, Recurrent neural networks, Healthcare funding, Data science

Introduction:

This study revolves around utilising supervised machine learning to predict healthcare funding that areas should receive. The field of machine learning, in its simplest form, acts as a manner of data analysis, taking data and answers to learn a set of rules for mapping these to each other. Supervised learning, a subset of machine learning, relies upon receiving examples of input and output variables and consequently discovering a function relating these [1]. This procedure is described below in a series of steps, and Figure 1 demonstrates this concept visually.

Figure 1 – Diagram displaying the process of supervised learning with the example of classifying fruits [2]

 

  1. Data compilation – A dataset of example inputs and outputs is compiled before the model carries out the steps below.
  2. Training phase – Subsequently, the examples' input variables are supplied to the model, from which it generates a prediction.
  3. Test phase – The model is tested on unseen data, provided as a CSV file to the program.
  4. Calculating metrics – A loss value is calculated based on how far each prediction was from the expected output variable.
  5. Epochs – The model carries out this process several times, depending on the number of epochs selected for it to run; an epoch refers to a single cycle of the described procedure.

Supervised learning is divided further into the subfields of classification and regression. Classification aims to characterise pieces of data, such as images, into specific categories. For example, a classification model may be used to determine what fruit an image represents [3]. Regression analysis utilises several input variables, known as features, to predict an output variable, referred to as the label. An example of regression analysis is using contextual information on a house's position to estimate its price [4]. This study employs regression analysis to determine the healthcare funding an area in a country should receive, using factors like the number of people in the area.
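As a concrete illustration of supervised regression, the sketch below fits a simple linear model to invented examples. The feature and label values, and the choice of scikit-learn, are assumptions made purely for demonstration; they are not the data or tooling used in this study.

    # Minimal supervised regression sketch; the numbers below are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Features: [population share (%), hospital-bed share (%)] for hypothetical areas
    X = np.array([[1.5, 2.9], [1.5, 3.3], [0.4, 0.8], [0.4, 0.7]])
    # Label: funding share (%) each hypothetical area received
    y = np.array([2.6, 2.8, 0.55, 0.6])

    model = LinearRegression().fit(X, y)       # training phase
    prediction = model.predict([[1.4, 3.2]])   # test on an unseen example
    print(prediction)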

This study used RNNs to implement regression analysis. Figure 2 demonstrates that RNNs differ significantly from "vanilla" feedforward neural networks, since feedback connections pass signals between their neurons, forming an implicit "internal memory". RNNs then utilise this memory to comprehend data passed as a sequence rather than as isolated numeric inputs. This naturally enables them to consider temporal factors when predicting an output variable, significantly better than other neural network frameworks [5]. They achieve this through timesteps (see Figure 3), which determine how many previous inputs the network can retain.


Figure 2 – Diagram observing how all nodes are connected in an RNN [6]

Figure 3 – Diagram showcasing the idea of a timestep in RNNs [7]

In Figure 3, x_t refers to the input value passed into the model at each time step and y_t is the corresponding output. It demonstrates how the RNN retains the output of each previous time step by continually passing its hidden state forward to the next prediction. This RNN incorporates the tanh activation function, a nonlinear function that helps to identify complex patterns within the dataset. Forward propagation is utilised to calculate and store the intermediate values of the RNN between the input and the final output, and backpropagation through time (BPTT), an algorithm that computes gradients in order to update the model's weights and biases, is adopted for training.
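In standard notation (a textbook formulation, not an equation reproduced from Figure 3), the hidden state h_t at time step t is computed from the current input x_t and the previous hidden state h_{t-1}, and the output y_t is read from it:

    h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)
    y_t = W_{hy} h_t + b_y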

However, there could be an issue with this approach, namely the exploding and vanishing gradients problem [8]. These refer to error gradients calculated during the training phase of an RNN, which should aid the model in finding a function for mapping the features to the labels. These error gradients may accumulate, leading to very large or very small gradients for the model to act on. In this situation the model updates itself inappropriately, distorting the training phase so that an optimal function will not be determined. Using a Long Short-Term Memory (LSTM) model avoids this problem, since an LSTM utilises three gates: the forget gate, the input gate, and the output gate. These gates better control how information, and therefore gradient values, flow at each time step, allowing the model to adjust its parameters appropriately [9]. In addition, the LSTM's additive cell-state structure enables it to determine a parameter update at any point, preventing the gradient from either exploding or vanishing.
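For reference, the standard LSTM update [9] can be written as follows. This is the textbook formulation rather than notation taken from this study, with \sigma denoting the sigmoid function and \odot elementwise multiplication:

    f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)        (forget gate)
    i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)        (input gate)
    o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)        (output gate)
    c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c [h_{t-1}, x_t] + b_c)
    h_t = o_t \odot \tanh(c_t)

The additive update of the cell state c_t is the structure referred to above: because gradients flow through a sum rather than through repeated multiplications, they neither explode nor vanish as readily.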

Due to the temporal nature of healthcare funding estimation, where data from previous years influences current decision making, this study integrated an LSTM. Furthermore, this research compares the LSTM model's results for this prediction with those of a standard linear regression model.

Normalisation is often necessary as a form of data preprocessing before passing a dataset to a model. By definition, it aims to scale the values in a numeric dataset, placing all numbers between zero and one [10]. Because the features then lie within a smaller, comparable range, the model trains more easily [11]. We normalised our training data to optimise this phase of the overall process.
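A minimal sketch of min-max normalisation is shown below; the values are illustrative, and the exact preprocessing pipeline used in this study may differ.

    # Min-max normalisation sketch: rescales each column to the range [0, 1].
    # The values here are illustrative, not the study's dataset.
    import numpy as np

    data = np.array([[1.54, 2.93, 2.62],
                     [1.48, 3.24, 2.83],
                     [0.38, 0.81, 0.55]])

    col_min = data.min(axis=0)
    col_max = data.max(axis=0)
    normalised = (data - col_min) / (col_max - col_min)  # every value now lies in [0, 1]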

Data:

Within supervised learning, the data passed to the model is often regarded as of most significance. Therefore, this study initially laid out the variables that would need to be included within the dataset. These consisted of the two features, population and hospital beds, alongside the label, funding. All variables were required to be formatted as percentages of the country's total, avoiding issues such as differing currency values and therefore enabling our project, theoretically, to be applied globally. After establishing which variables were required, the model's training data was compiled solely from one source [12], owing to the global scarcity of publicly available information of this kind. The lack of data had the potential to affect this study's results, since it could cause the model to identify patterns only within a limited range of values and be unable to extrapolate a principle successfully beyond the data passed. However, after formatting our data as percentages, as demonstrated in Figure 4, there was sufficient data for our model to train successfully. Furthermore, since Canada consists of thirteen provinces and territories, the last region within the dataset was held out as a test case: the model would not train on this data but would instead be tested on it after training to assess its results.

Population (%)    Hospital Beds (%)    Funding Allocated (%)
100               100                  100
1.54              2.93                 2.62
1.54              3.27                 2.84
1.54              3.39                 2.89
1.52              3.24                 2.74
1.49              3.27                 2.63
1.48              3.24                 2.83
1.48              3.27                 2.75
1.45              3.23                 2.65
1.42              3.23                 2.62
0.38              0.81                 0.55
0.38              0.78                 0.57
0.37              0.66                 0.58
0.38              0.68                 0.60
0.38              0.65                 0.59

Figure 4 – Table representing the first 15 examples in our dataset.
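The percentage formatting shown in Figure 4 can be produced by dividing each region's raw figures by the national totals. The sketch below assumes a pandas DataFrame with hypothetical column names and figures, since the original preprocessing code is not reproduced in this article.

    # Sketch of converting raw regional figures into percentages of the national
    # total, matching the format of Figure 4. Column names and values are assumed.
    import pandas as pd

    raw = pd.DataFrame({
        "population":    [5_000_000, 1_200_000],
        "hospital_beds": [12_000, 2_600],
        "funding":       [8_000_000_000, 1_700_000_000],
    })

    national_totals = raw.sum()
    percentages = raw / national_totals * 100  # each column now sums to 100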

Literature Review:

Supervised machine learning has widely been recognised as a catalyst for revolutions in many fields globally [13]. For instance, it has been employed to transform population genetics, where a support vector machine (SVM) achieved an accuracy of 88% upon training on a quantitative dataset [13]. Population genetics is similar to healthcare funding estimation in that both fields yield ever-growing datasets. Additionally, although machine learning has been deployed within economics and funding evaluation, no exploration has been made into healthcare funding estimation specifically [14]. In light of the widespread ramifications of the recent COVID-19 pandemic and of several economic crises in past decades, there is an increased need to establish an accurate, reliable distribution of healthcare funds [15]. One study has supplied recommendations for improving funding distribution without the use of machine learning [16]. Yet the specificity of that study, concentrating solely on China, leaves much to be answered for the rest of the world [16]. Furthermore, it examines reducing ethical bias in funding distribution rather than employing numerically quantifiable features of areas to determine the funding they should receive [16]. Therefore, much remains to be explored in applying machine learning to this sector using quantitative rather than ethical considerations.


Methodology:

At the initial stages of this study, it became evident that it would be necessary to examine several leading healthcare systems across the globe before constructing the RNN. This would enable a greater understanding of the countries from which data would be extracted.

Case Studies:

Germany:

Germany adopts a financing system based on comparing medical institutes' efficiency, fulfilling their pecuniary demands according to their efficacy [18]. This approach led to the adoption of factors affecting performance, such as the number of hospital beds and population, as features in this study.

Switzerland:

The introduction of national Diagnosis Related Group (DRG)-based hospital payments in Switzerland in 2012 appeared most relevant to this study. The basic principle is that all patients treated by a hospital are divided into DRGs, each ideally grouping cases with homogeneous resource use. Each DRG is assigned a specific cost tariff, calculated from hospitals' historical data, indicating an appropriate value [19]. Apart from Switzerland, 12 European countries, including Germany, utilise a similar concept with slight variations. Several of these countries are displayed below in Figure 5, which shows their overall payment model as well as its year of introduction.

Canada:

Finally, Canada was examined due to its internationally esteemed healthcare system, which grants access to every citizen regardless of age or financial status [17]. It adopts an approach in which private healthcare organisations utilise governmental funding, with 70% of Canada's healthcare funding supplied by the government in 1993 [18]. As Figure 6 demonstrates, this led to a total health expenditure of approximately 70 billion dollars in 1993, measured in constant 1997 Canadian dollars. Furthermore, the most useful notion from the Canadian healthcare system is its outcomes, which compare favourably with those of the USA [20]. This further suggests that the precise allocation of a budget may be more important than its overall size, although this study recognises that the difference could be attributed to other factors, such as differences in efficiency between public and private healthcare.


Figure 6 – Graph showing Canada's total health expenditure in constant 1997 dollars [21]

RNN

Following the completion of the research on leading healthcare systems, the RNN was developed for this study using the TensorFlow package in Python. The model trained on the data outlined in the Data section, which used Canada's data alone due to the global scarcity of publicly available information. Nevertheless, examining Germany and Switzerland as case studies benefited this study extensively in motivating the use of machine learning to make calculated decisions on allocating healthcare funding, despite their data not being used. Furthermore, this study also developed an additional model: a vanilla, feedforward neural network adopting a single dense layer, in which each neuron receives input from all neurons in the previous layer. This enabled the investigation of this research's secondary goal of determining whether RNNs are a better computational architecture than vanilla neural networks for estimating funding, by assessing each model's results.

Model Summary for the RNN:

The summary above shows the RNN model built using the TensorFlow module. It comprises three layers: a normalisation layer, an LSTM (RNN) layer, and a dense layer. A sequential model, in which layers are stacked so that each one's output feeds the next, was employed, with a single output neuron since the model has only one label.
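Since the printed model summary is not reproduced in this article, the sketch below shows one way such an architecture could be defined with TensorFlow/Keras, matching the described normalisation, LSTM, and dense layers. The unit count and number of timesteps are assumptions for illustration, not the study's exact configuration.

    # Hedged sketch of the described architecture (Normalization -> LSTM -> Dense).
    # Unit counts and the timestep count are assumptions for illustration.
    import numpy as np
    import tensorflow as tf

    timesteps, n_features = 3, 2          # e.g. 3 years of (population %, hospital beds %)
    norm = tf.keras.layers.Normalization(axis=-1)
    norm.adapt(np.random.rand(100, timesteps, n_features))  # adapt to real training data in practice

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, n_features)),
        norm,
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),         # single output: funding (%)
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()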

Model Summary for the Vanilla Neural Network:

This summary demonstrates that the vanilla neural network employs only one layer fewer than the RNN, yet this has significant effects on the model's performance. Additionally, the model handles data differently from the RNN: it is not passed time-series data, but instead receives the data for a single year to compute the label.
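For comparison, a sketch of the single-dense-layer feedforward baseline described above might look as follows; again, this is an illustrative reconstruction rather than the study's exact code.

    # Hedged sketch of the feedforward baseline: one year of features -> funding %.
    import tensorflow as tf

    baseline = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),       # (population %, hospital beds %) for one year
        tf.keras.layers.Dense(1),         # single dense layer producing the funding %
    ])
    baseline.compile(optimizer="adam", loss="mse")
    baseline.summary()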

Graphical User Interface (GUI):

Finally, a graphical user interface (GUI) was developed with the Python package Kivy to allow a user, envisioned as a government, to enter data for several areas. The GUI then outputs our model's predictions for each area with varying visual representations: several line graphs, as well as a heat map and a scatter graph, to view the distribution of funding across all areas. Figures 7 and 8 showcase the opening and secondary screens of this GUI, while Figures 9, 10 and 11 display these visual representations.
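A minimal Kivy skeleton of such an interface is sketched below; the widget names, layout, and placeholder prediction logic are assumptions, since the study's GUI code is not reproduced here.

    # Minimal Kivy sketch: a text input for an area's data and a label for the output.
    # The prediction step is a placeholder, not the study's trained model.
    from kivy.app import App
    from kivy.uix.boxlayout import BoxLayout
    from kivy.uix.button import Button
    from kivy.uix.label import Label
    from kivy.uix.textinput import TextInput


    class FundingApp(App):
        def build(self):
            root = BoxLayout(orientation="vertical")
            self.inputs = TextInput(hint_text="population %, hospital beds %")
            self.result = Label(text="Predicted funding will appear here")
            button = Button(text="Predict")
            button.bind(on_press=self.predict)
            for widget in (self.inputs, button, self.result):
                root.add_widget(widget)
            return root

        def predict(self, _instance):
            # Placeholder: a real version would pass the parsed inputs to the trained model.
            self.result.text = f"Entered: {self.inputs.text}"


    if __name__ == "__main__":
        FundingApp().run()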

Figure 7 – GUI opening screen

Figure 8 – GUI secondary screen

Figure 9 – Funding graph produced from GUI

 

Figure 10 – Heat map produced from GUI with randomly generated data

 

Figure 11 – Scatter graph produced from GUI with randomly generated data

Results:

Firstly, it is necessary to establish that each model's results were quantified after training for ten thousand epochs. As Figure 12 demonstrates, the RNN's loss visually appears to plateau at approximately five hundred epochs; however, because of the large scale Figure 12 uses, the loss truly stops decreasing at around ten thousand iterations. Similarly, the vanilla neural network plateaus just before ten thousand epochs, so for a fair comparison the same number of epochs was used for both models. Consequently, using the test cases outlined in the Data section, this study assessed each model with a graph highlighting how far the model's predictions were from the actual funding. Furthermore, this study calculated accuracy as a percentage by taking the exponential of the mean of these differences across all test cases. Figure 13 showcases the results of the standard feedforward neural network, which demonstrate a significant gap between the model's estimates and the actual funding the test cases received. They also expose a fundamental issue with this model: it estimated funding of less than zero percent for some areas, which is mathematically impossible. Therefore, no accuracy could be calculated for this model using the equation this study utilised. By contrast, Figure 14 displays the RNN's predictions, which are evidently much closer to the actual funding, and the issue of negative funding predictions no longer arises. Hence, an accuracy could be determined for this model and was found to be 95.4%. Finally, additional metrics were also calculated, such as the loss under mean squared error (MSE), which was 0.113 (to 3 significant figures).
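The reported mean squared error can be reproduced from predicted and actual funding values as sketched below; the numbers shown are placeholders rather than the study's test-case values.

    # Sketch of the MSE loss calculation on test cases; values are placeholders.
    import numpy as np

    actual    = np.array([0.59, 0.60, 0.58])   # hypothetical funding (%) received
    predicted = np.array([0.62, 0.55, 0.61])   # hypothetical model outputs
    mse = np.mean((predicted - actual) ** 2)
    print(f"MSE: {mse:.3f}")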

Figure 12 – Graph showing how the number of epochs (x axis) affects the model’s loss (y axis) in the training phase

Figure 13 – Graph showcasing the standard, vanilla neural network’s predictions compared to the actual funding test cases received.

Figure 14 – Graph showing the RNN's predictions in comparison with the actual funding test cases received.

Discussion:

The outcomes described in the Results section demonstrate an accurate model for healthcare funding estimation. These results suggest that an RNN architecture could be sufficient for real-world application. However, the limited diversity of the test cases used to assess the RNN hampers global application, as all of them were extracted from Canadian data. Despite this, the results strongly encourage further research into using RNNs for the same or a similar purpose. Fundamentally, the results suggest that RNNs are superior to standard, vanilla neural networks in funding estimation. This finding is significant, as it implies that for temporal healthcare funding data, RNNs should be adopted instead of feedforward neural networks, which clearly addresses the objectives laid out at the start of the study. In the future, integrating further assessments to quantify the RNN's exact performance could determine its true potential and explain key points of this study in greater detail.

Conclusion:

Currently, leading healthcare systems adopt various methods to distribute funding optimally across their areas. However, less advantaged countries suffer unnecessary ramifications due to inadequate allocation of their funds. This study adopts recurrent neural networks (RNNs) to estimate the funding an area should receive using a supervised learning approach. This prediction process emulates the choices that leading budgeting systems make by training on a dataset compiled from such a system [12].

This study demonstrates a successful training phase, optimised particularly through the use of RNNs as opposed to vanilla, feedforward neural networks. The results highlight how the model's predictions consistently remained close to the actual funding areas received, based on the test cases passed.

Despite our model showcasing accurate results, several aspects can be improved in the future to enhance it substantially: for example, using more features to predict funding, compiling further data to train from, and extracting more geographically diverse data.

Acknowledgements:

Finally, this study would not have been possible without the continued support of both Mrs. Kaur, STEM Regional Lead in the East Midlands, and Mr. Parish from Loughborough Grammar School. Their continued aid and contributions helped greatly.

References:

  1. Brownlee, Jason. 2021. "Supervised and Unsupervised Machine Learning Algorithms". Machine Learning Mastery. https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/.
  2. Supervised Learning. 2020. Image. https://www.tutorialandexample.com/wp-content/uploads/2020/11/Supervised-Machine-Learning-1.png.
  3. Waseem, Mohammad. n.d. "Classification in Machine Learning | Classification Algorithms | Edureka". Edureka. https://www.edureka.co/blog/classification-in-machine-learning/.
  4. Roman, Victor. 2021. "Supervised Learning: Basics of Linear Regression". Medium. https://towardsdatascience.com/supervised-learning-basics-of-linear-regression-1cbab48d0eba.
  5. Lin, T., Horne, B.G., and Giles, C.L. 1998. "How Embedded Memory in Recurrent Neural Network Architectures Helps Learning Long-Term Temporal Dependencies." Neural Networks 11(5): 861-868.
  6. Recurrent Neural Network. n.d. Image. https://upload.wikimedia.org/wikipedia/commons/thumb/0/00/Multi-Layer_Neural_Network-Vector-Blank.svg/1200px-Multi-Layer_Neural_Network-Vector-Blank.svg.png.
  7. Morris, Jarlai, Nataliia Nevinchana, and Sarah Tam. 2020. Recurrent Neural Network. Image. https://miro.medium.com/max/4200/0*gnnwlqLZuC8bpZ2-.
  8. Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. 2013. "On the Difficulty of Training Recurrent Neural Networks." In International Conference on Machine Learning, pp. 1310-1318. PMLR.
  9. Hochreiter, Sepp, and Jürgen Schmidhuber. 1997. "Long Short-Term Memory." Neural Computation 9(8): 1735-1780.
  10. Zhang, Zixuan. 2019. "Understand Data Normalization in Machine Learning". Medium. https://towardsdatascience.com/understand-data-normalization-in-machine-learning-8ff3062101f0.
  11. Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. "Layer Normalization." arXiv preprint arXiv:1607.06450.
  12. "Health Spending | CIHI". 2020. Cihi.ca. https://www.cihi.ca/en/health-spending.
  13. Schrider, Daniel R., and Andrew D. Kern. 2018. "Supervised Machine Learning for Population Genetics: A New Paradigm." Trends in Genetics 34(4): 301-312.
  14. Zhang, Chuqing, Han Zhang, and Xiaoting Hu. 2019. "A Contrastive Study of Machine Learning on Funding Evaluation Prediction." IEEE Access 7: 106307-106315.
  15. Ahtonen, Annika. 2013. "Economic Governance: Helping European Healthcare Systems to Deliver Better Health and Wealth." European Policy Centre, Brussels.
  16. Chen, Yiyi, Zhou Yin, and Qiong Xie. 2014. "Suggestions to Ameliorate the Inequity in Urban/Rural Allocation of Healthcare Resources in China." International Journal for Equity in Health 13(1): 1-6.
  17. Armstrong, Pat, and Hugh Armstrong. 2019. About Canada: Health Care. Fernwood Publishing.
  18. Wiley, Miriam M., Mary A. Laschober, and Hellen Gelband. 1995. "Hospital Financing in Seven Countries." US Congress.
  19. Busato, André, and Georg von Below. 2010. "The Implementation of DRG-Based Hospital Reimbursement in Switzerland: A Population-Based Perspective." Health Research Policy and Systems 8(1): 1-6.
  20. Sanmartin, Claudia, Jean-Marie Berthelot, Edward Ng, Kellie Murphy, Debra L. Blackwell, Jane F. Gentleman, Michael E. Martinez, and Catherine M. Simile. 2006. "Comparing Health and Health Care Use in Canada and the United States." Health Affairs 25(4): 1133-1142.
  21. Graph Showing Canada's Total Health Expenditure. n.d. Image. https://upload.wikimedia.org/wikipedia/commons/4/43/Total_health_expenditure_in_constant_1997_dollars.png.

About the authors

Ashish is a Year 10 student at Loughborough Grammar School who is particularly passionate about computer science and its real-world applications. Ashish wishes to go into medicine in the future, using computer science alongside it to improve our understanding of the human body.

Anshul is a Year 10 student at Loughborough Grammar School in Leicestershire who is particularly passionate about data science and computer science in general. He is very interested in their applications in finance. In the future, Anshul hopes to study economics and delve into data analytics' role within it.
