
Sudoku for Beginners: How to Improve Your Problem-Solving Skills

Are you a beginner when it comes to solving Sudoku puzzles? Do you find yourself frustrated and unsure of where to start? Fear not, as we have compiled a comprehensive guide on how to improve your problem-solving skills through Sudoku.

Understanding the Basics of Sudoku

Before we dive into the strategies and techniques, let’s first understand the basics of Sudoku. A Sudoku puzzle is a 9×9 grid that is divided into nine smaller 3×3 grids. The objective is to fill in each row, column, and smaller grid with numbers 1-9 without repeating any numbers.

Starting Strategies for Beginners

As a beginner, it can be overwhelming to look at an empty Sudoku grid, but don’t worry: there are simple strategies to get you started. First, look for any row or column that is missing only one number, fill it in, and move on to the next such row or column. The same idea works for the smaller 3×3 grids: if one has only a single missing number, fill it in.

Advanced Strategies for Beginner/Intermediate Level

Once you’ve mastered the starting strategies, it’s time to move on to more advanced techniques. One technique is called “pencil marking.” This involves writing down all possible numbers in each empty square before making any moves. Then use logic and elimination techniques to cross off impossible numbers until you are left with the correct answer.
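The pencil-marking idea maps directly onto a short computation. Here is a minimal sketch (not from the article) of a hypothetical helper, assuming the puzzle is stored as a 9×9 list of lists with 0 marking an empty square:

```python
# "Pencil marking" for one cell: list every digit not already used in the
# cell's row, column, or 3x3 box. The grid representation is an assumption.

def candidates(grid, row, col):
    """Return the set of digits that could legally go in grid[row][col]."""
    if grid[row][col] != 0:
        return set()                                     # already filled
    used = set(grid[row])                                # same row
    used |= {grid[r][col] for r in range(9)}             # same column
    br, bc = 3 * (row // 3), 3 * (col // 3)              # top-left of the box
    used |= {grid[r][c] for r in range(br, br + 3)
                        for c in range(bc, bc + 3)}      # same 3x3 box
    return set(range(1, 10)) - used
```

Recomputing candidates after each placement, and filling any square whose candidate set shrinks to a single number, is exactly the elimination loop described above.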

Another advanced technique is the “naked pair.” Look for two squares within a row, column, or smaller grid that are each down to the same two candidate numbers. Those two squares must contain those two numbers between them, so the pair can be crossed off the candidate lists of every other square in that row, column, or grid.

Benefits of Solving Sudoku Puzzles

Not only is solving Sudoku puzzles fun and challenging, but it can also benefit your brain health. It helps improve your problem-solving skills, enhances memory and concentration, and some research suggests it may help reduce the risk of developing Alzheimer’s disease.

In conclusion, Sudoku is a great way to improve your problem-solving skills while also providing entertainment. With these starting and advanced strategies, you’ll be able to solve even the toughest Sudoku puzzles. So grab a pencil and paper and start sharpening those brain muscles.

This text was generated using a large language model, and select text has been reviewed and moderated for purposes such as readability.



Neural Networks: A Solution for Complex Problems in Science and Engineering

  • Neil Sahota
  • Published: December 25, 2022
  • Artificial Intelligence


The integration of technology into daily life is not a new idea. Examples include smart cities, smart homes, prosthetics, and robots that mimic human intelligence. These advancements demonstrate the extent to which humans have progressed.

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are the next step in that progress.

In this article, we will cover the potential of neural networks for solving complex problems in science and engineering, as well as their types, benefits, and challenges.

What are Neural Networks?

Artificial neural networks consist of layers of connected nodes, called artificial neurons, which can receive, process, and transmit data and are designed to work like the human brain.

Used to solve complex problems in artificial intelligence (AI), they are based on the concept of machine learning, and more specifically deep learning. In this approach, the computer can learn and make decisions without being explicitly programmed or guided by a human.
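At the smallest scale, each artificial neuron simply combines its weighted inputs and applies an activation function. A minimal sketch in Python (the sigmoid activation and the example values are illustrative choices, not tied to any particular system):

```python
import math

# A single artificial neuron: weighted sum of inputs plus a bias, passed
# through an activation function (here, the logistic sigmoid).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)
```

A full network is just many of these units wired together in layers, with the output of one layer feeding the next.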

What makes neural networks different from conventional programs is their fault tolerance: because information is distributed across many connections, they can often continue to function, and to learn, even if some parts are damaged or lost.

Furthermore, system architectures that learn to perform essential tasks or watch for irregularities can be a significant asset for engineers. Experts have demonstrated the versatility of this form of artificial intelligence in a wide variety of projects, a number of which could hold tremendous significance for the future of healthcare and transportation.

Neural Networks in Action: Real-World Applications

Neural networks are used in many types of technology such as Google’s search engine. They are swift and accurate, which makes them very useful for solving complex problems.

One of the best-known neural network applications is in self-driving cars. These vehicles use neural networks to gather sensory information about the road ahead, such as the location of other vehicles and obstacles, and to make decisions about navigating the environment.

A good example is Michigan State University’s Connected and Autonomous Networked Vehicles for Active Safety (CANVAS) group, which developed a neural network called the CANVAS Brain.

Other examples of neural network applications in science and engineering include:

  • Predicting and forecasting the weather
  • Identifying and classifying objects in images or videos
  • Analyzing and interpreting speech
  • Designing and optimizing materials for various applications
  • Predicting the behavior of systems, such as traffic patterns or financial markets.

Also, in the healthcare industry, neural networks have the potential to improve delivery and outcomes by predicting how often aging populations need to visit the hospital, detecting early signs of Alzheimer’s disease in MRI scans, and automating the processing of clinical notes.

Types of Neural Networks

Neural networks can be classified based on how the data flows from the input nodes (where the data enters the network) to the output nodes (where the results are produced).

There are several different types of neural network, each with its strengths and limitations. For example, some neural networks are better at solving problems with precise input-output mapping, while others are better at processing data with a grid-like structure, such as images.

The most commonly used types of neural network include:

Feedforward Neural Networks

These are good at solving problems with a clear relationship between the input and the output. For example, if you have a picture of a cat and you want the computer to recognize it as a cat, a feedforward neural network can do that. However, these networks can struggle with more complex relationships.

Convolutional Neural Networks

These are used for tasks that involve data with a grid-like structure, like pictures. They are excellent at object recognition in images and have been used in many different applications. However, they can be slow to train and need a lot of data to learn from.
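The grid-like processing at the heart of a convolutional network is a small filter sliding across the image. A toy sketch in plain Python (the image and kernel values are illustrative):

```python
# "Valid" 2D convolution: slide a small kernel over the image and sum the
# elementwise products at each position. No padding, stride 1.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out
```

In a real CNN, many such filters are learned from data, and their outputs are stacked and pooled to detect increasingly abstract features.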

Recurrent Neural Networks

These are used for tasks involving data in a sequence, like words in a sentence. They are good at understanding the relationships between different pieces of data and have been used for language translation and speech recognition. However, they can be hard to train and often struggle to learn long-range relationships.

Generative Adversarial Networks

These are made up of two neural networks, a generator and a discriminator, that compete with each other to produce synthetic data that looks real. For example, they have been used to create realistic images and sounds. However, they can be hard to train and also need a lot of data to work well.

Autoencoder Neural Networks

These are used to reduce the complexity of data and learn its essential features. They have been used in tasks like image and speech recognition. However, they can be sensitive to the settings we use and may not always learn meaningful patterns in the data.

What are the Advantages and Disadvantages of Neural Networks?

Neural networks are a potent tool for data analysis and machine learning, and they have a number of unique characteristics that make them well-suited for specific tasks. However, like any tool, neural networks also have advantages and disadvantages. 

Benefits of Neural Networks

There are several benefits to using artificial neural networks (ANNs) for data analysis and machine learning tasks. Some of the pros include:

  • Neural networks are adaptable and can be used for both regression and classification problems.
  • Any data that can be converted to numbers can be used with a neural network, since it is a mathematical model built on approximation functions.
  • Neural networks can model nonlinear data with many inputs, such as images.
  • They are reliable for tasks involving many features, splitting the classification problem into a layered network of more specific elements.
  • Neural networks make fast predictions once they are trained.
  • They can be customized with any number of inputs and layers.
  • Neural networks perform best with more data points.

Limitations of Neural Networks

A decision made by a computer is based on a select set of qualities, values, or requirements at a given time. These approximate results may sometimes lead to incorrect decisions. Because of their complex nature, several drawbacks to using neural networks need to be addressed. Some of the cons of neural networks involve the following:

  • Neural networks are known as “black boxes” because it is difficult to understand how much each input variable influences the output.
  • Training neural networks can be computationally expensive and time-consuming on traditional CPUs.
  • Neural networks rely heavily on the training data, which can lead to problems of overfitting and poor generalization: the model may become attuned to the training data and fail to perform as well on new or unseen data.

Is Deep Learning the Same as Neural Networks?

Deep learning and neural networks are closely related but not the same. A neural network is the model itself: layers of artificial neurons that process and transmit information. Deep learning is the broader technique of training networks with many such layers so that the computer can learn and make decisions with little human guidance.

Deep learning is focused on finding patterns and relationships in data, similar to how the brain processes and responds to stimuli. It does this by using layers of neural networks to filter and analyze data.

Neural nets, on the other hand, take data in as input and produce output through their various connections. They can be used for various tasks, such as recognizing patterns or making decisions based on the data.

So, what are the benefits and challenges of using deep learning techniques based on neural networks in problem-solving?

The benefits of deep learning algorithms include the following:

  • They can process large amounts of data.
  • They can learn and identify complex patterns in data, which leads to improved accuracy in predictions and decisions.

Moreover, these algorithms can learn from unstructured data, such as images and text, which can be helpful for tasks such as natural language processing.

However, one of the challenges to using deep learning techniques in problem-solving is that the algorithms require a large amount of data to learn from and a lot of computational power to process everything. All this can be difficult to obtain, expensive, and resource-intensive.

Another issue can be the difficulty in interpreting and understanding results, as these models operate as a “black box,” and it is not always clear how they arrived at a particular decision or prediction.

Finally, deep learning algorithms can be prone to overfitting, which means they perform well on the training data but poorly on unseen data. A way to mitigate this is through proper data splitting and regularization techniques.
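The “proper data splitting” mentioned above usually starts with a simple holdout split: reserve part of the data so the model is evaluated on examples it never trained on. A minimal sketch (the 80/20 ratio and the fixed seed are illustrative choices):

```python
import random

# Holdout split: shuffle the data, then hold out a fraction for testing
# so overfitting can be detected on examples the model never saw.

def train_test_split(data, test_fraction=0.2, seed=42):
    rng = random.Random(seed)
    shuffled = data[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)
```

A large gap between training and test performance is the classic symptom of overfitting; regularization techniques then shrink that gap.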

Ongoing Development and Future Potential of Neural Networks

Artificial neural networks and deep learning are currently popular techniques for developing solutions to specific problems, but they are not the only available options.

Some potential future developments in neural network technologies include the integration of: 

  • Fuzzy logic, which allows for more than just true or false values and can be designed for a variety of applications;
  • Pulsed neural nets, which use the timing of pulses to transmit information and perform computations;
  • Specialized hardware such as neural processing units (NPUs) and neurosynaptic architectures, which function more like a biological brain than a traditional computer; and
  • Improvements to existing technologies through faster and cheaper computing power and improved training methods.

There are also potential applications in robotics, where neural networks can be used to enable robots to think and make decisions in a more fluid and non-brittle way.

There may also be opportunities for human-machine brain melding, where neural networks have the potential to connect human brains with artificial intelligence.

However, there are still many challenges to overcome in these areas. These include ethical issues and the need for a better understanding of how neural networks work.

It is also possible that neural networks may become obsolete as new approaches to artificial intelligence and problem-solving emerge.

Neural Networks: Key Takeaways

Neural networks consist of layers of connected nodes, called artificial neurons, which can receive, process, and transmit data and are designed to work like the human brain.

They are based on the human brain’s structure and function and can learn and adapt to new information. Neural networks have already been used for a variety of tasks, including speech and image recognition, natural language processing, and predictive modeling.

There are several types of neural networks:

  • Feedforward neural networks 
  • Convolutional neural networks 
  • Recurrent neural networks 
  • Generative adversarial networks 
  • Autoencoder neural networks 

Overall, there are many future possibilities for neural networks, from robotics and brain-machine interfaces to the creation of artificial general intelligence. However, challenges remain, such as their black-box nature and the need to better understand how they work.

Despite these challenges, the potential of neural networks to solve complex problems in science and engineering is significant and continues to be explored and developed. 



Neural Networks Provide Solutions to Real-World Problems: Powerful new algorithms to explore, classify, and identify patterns in data

By Matthew J. Simoneau, MathWorks and Jane Price, MathWorks

Inspired by research into the functioning of the human brain, artificial neural networks are able to learn from experience. These powerful problem solvers are highly effective where traditional, formal analysis would be difficult or impossible. Their strength lies in their ability to make sense out of complex, noisy, or nonlinear data. Neural networks can provide robust solutions to problems in a wide range of disciplines, particularly areas involving classification, prediction, filtering, optimization, pattern recognition, and function approximation.


A look at a specific application will illustrate how neural network technology can be applied to solve real-world problems. An interesting example can be found at the University of Saskatchewan, where researchers are using MATLAB and the Neural Network Toolbox to determine whether a popcorn kernel will pop.

Knowing that nothing is worse than a half-popped bag of popcorn, they set out to build a system that could recognize which kernels will pop when heated by looking at their physical characteristics. The neural network learns to recognize what differentiates a poppable from an unpoppable kernel by looking at 16 features, such as roughness, color, and size.

The goal is to design a neural network that maps a set of inputs (the 16 features extracted from a kernel) to the proper output, in this case a 1 for popped, and -1 for unpopped. The first step is to gather this data from hundreds of kernels. To do this, the researchers extract the characteristics of each kernel using a machine vision system, then heat the kernel to see if it pops. This data, when combined with the proper learning algorithm, will be used to teach the network to recognize a good kernel from a bad one.

Designing the network

As the name suggests, a neural network is a collection of connected artificial neurons. Each artificial neuron is based on a simplified model of the neurons found in the human brain. The complexity of the task dictates the size and structure of the network. The popcorn problem requires a standard feed-forward network. An example of this type of network is shown in Figure 1. But the popcorn problem needs 16 inputs, 15 neurons in the first hidden layer, 35 in the second, and 1 output neuron. Each neuron has a connection to each of the neurons in the previous layer. Each of these connections has a weight that determines the strength of the coupling.

For this problem, the backpropagation algorithm guides the network's training. It holds the network's structure constant and modifies the weight associated with each connection. This is an iterative process that takes these initially random weights and adjusts them so the network will perform better after each pass through the data. Each set of features is presented to the neural network along with the corresponding desired output. The input signal propagates through the network and emerges at the output. The network's actual output is compared to the desired output to measure the network's performance. The algorithm then adjusts the weights to decrease this error. This training process continues until the network's performance can no longer improve.
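The iterative adjust-the-weights loop described above can be sketched for a toy single neuron with a linear output. This is an illustrative stand-in, not the MathWorks code: it applies plain stochastic gradient descent to the squared error, which is the same compare-and-correct idea backpropagation applies layer by layer.

```python
import random

# Toy version of iterative error-driven training: start from random
# weights, compare each output to the desired target, and nudge the
# weights against the error gradient after every sample.

def train(samples, n_features, lr=0.1, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(n_features)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b   # linear output
            err = y - target            # gradient of 0.5*err^2 w.r.t. y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b
```

After enough passes, the prediction error stops shrinking, which is the stopping condition the article describes.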

The desired result is a neural network that is able to distinguish a poppable kernel from an unpoppable one. The key to the training is that the network doesn't just memorize specific kernels. Rather, it generalizes from the training sample and builds an internal model of which combinations of features determine “poppability.” The test, of course, is to give the network some data extracted from kernels it has never seen before and have it classify them. Illustrated in Figure 2, the network is correct three out of four times, providing the manufacturer with a method to significantly increase popcorn quality.

More Than Popcorn

Neural network technology has proven to excel at solving a variety of complex problems in engineering, science, finance, and market analysis. Examples of the practical applications of this technology are widespread. For example, NOW! Software uses the Neural Network Toolbox to predict prices in futures markets for the financial community. The model is able to generate highly accurate, next-day price predictions. Meanwhile, researchers at Scientific Monitoring, Inc., are using MATLAB and the Neural Network Toolbox to apply a neural network-based sensor validation system to a simulation of a turbofan engine. Their ultimate goal is to improve the time-limited dispatch of an aircraft by deferring engine sensor maintenance without a loss in operational safety or performance.

Highlights of Neural Network Toolbox 3.0

The latest release offers several new features, including new network types, learning and training algorithms, improved network performance, easier customization, and increased design flexibility.

  • New modular network representation: all network properties are collected in a single network object and can be easily customized
  • New reduced-memory Levenberg-Marquardt algorithm for handling very large problems
  • New supervised networks: generalized regression and probabilistic networks
  • New network training algorithms: resilient backpropagation (Rprop), conjugate gradient, and two quasi-Newton methods
  • Flexible and easy-to-customize network performance, initialization, learning, and training functions
  • Automatic creation of network simulation blocks for use with Simulink
  • New training options: automatic regularization, training with validation, and early stopping
  • New pre- and post-processing functions

Published 1998

Products Used

Deep Learning Toolbox


Mathematics of artificial intelligence: the learning problem in neural networks

  • Roberto López
  • https://es.linkedin.com/in/robertolopezartelnics
  • August 31, 2023

The mathematics behind artificial intelligence can be challenging to understand, but comprehending it is essential to applying machine learning algorithms properly. Interpreting the results also requires an understanding of mathematics and statistics. This post explains the learning problem of neural networks from a mathematical point of view, in particular from the perspective of functional analysis and variational calculus.

  • Preliminaries
  • Learning problem
  • Conclusions

1. Preliminaries

First, we will define some mathematical concepts.

Function

A function is a binary relation between two sets that associates each element of the first set to exactly one element of the second set. $$\begin{align*} y \colon X &\to Y\\ x &\mapsto y(x) \end{align*}$$

Function space

A function space is a normed vector space whose elements are functions. For example, $C^1(X, Y)$ is the space of continuously differentiable functions, or $P^3(X, Y)$ is the space of all polynomials of order 3.

Optimization problem

An optimization problem consists of finding a vector that minimizes a function.

For example, let’s look at the function $f(x)=x^2$.

[Figure: the function $f(x)=x^2$, with its minimum at $x=0$]

In this case, the minimum is at $x=0$.
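Numerically, such a minimum can be found by gradient descent: repeatedly stepping against the derivative. A small illustration (not from the post):

```python
# Gradient descent on f(x) = x^2, whose derivative is f'(x) = 2x.
# Each step moves against the gradient, walking toward the minimum at x = 0.

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x      # x shrinks by the factor (1 - 2*lr) each step
    return x
```

Starting from any $x_0$, the iterates contract toward $x=0$; this is the same idea the optimization algorithms later in the post apply in many dimensions.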

Functional

A functional assigns a number to each function belonging to some class. $$\begin{align*} F \colon V &\to R\\ y(x) &\mapsto F[y(x)] \end{align*}$$

Variational problem

A variational problem consists of finding a function that minimizes a functional.

For example, given two points $A$ and $B$, we consider the functional of length.

[Figure: the length functional between the points $A$ and $B$]

The function $y^*(x)$ that minimizes the length between the two points is a straight line.

Neural network

A neural network is a biologically inspired computational model consisting of a network architecture composed of artificial neurons. Neural networks are a powerful technique for solving approximation, classification, and forecasting problems.

[Figure: a neural network diagram]

This diagram shows a neural network with four inputs and two outputs.

2. Learning problem

We can express the learning problem as finding a function that causes some functional to assume an extreme value. This functional, or loss index, defines the task the neural network must do and provides a measure of the quality of the representation that the network needs to learn. The choice of a suitable loss index depends on the particular application.

Here is the activity diagram.

[Figure: activity diagram of the learning problem]

We will explain each of the steps one by one.

2.1. Neural network

A neural network spans a function space, which we will call $V$: the space of all the functions that the neural network can produce. $$\begin{align*} y \colon X \subset R^n &\to Y \subset R^m\\ x &\mapsto y(x;\theta) \end{align*}$$ This structure contains a set of parameters, which the training strategy will later adjust to perform specific tasks. Each function depends on a vector of free parameters $\theta$. The dimension of $V$ is the number of parameters in the neural network.

For example, consider a neural network with a hidden layer that uses a hyperbolic tangent activation function and an output layer with a linear activation function. The dimension of the function space is 7, the number of parameters in this neural network.

[Figure: the example neural network with one hidden layer]

The elements of the function space are of the following form. $$y(x;\theta)=b_1^{(2)}+w_{1,1}^{(2)}\tanh(b_1^{(1)}+w_{1,1}^{(1)}x)+w_{2,1}^{(2)}\tanh(b_2^{(1)}+w_{1,2}^{(1)}x)$$
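For concreteness, one element of this 7-parameter function space can be evaluated directly. A sketch in Python, with arbitrary illustrative parameter values (the ordering of the parameter tuple is an assumption):

```python
import math

# Evaluate y(x; theta) for the 7-parameter network above:
# y = b1_2 + w11_2*tanh(b1_1 + w11_1*x) + w21_2*tanh(b2_1 + w12_1*x)

def y(x, theta):
    b1_2, w11_2, b1_1, w11_1, w21_2, b2_1, w12_1 = theta
    return (b1_2
            + w11_2 * math.tanh(b1_1 + w11_1 * x)
            + w21_2 * math.tanh(b2_1 + w12_1 * x))
```

Each distinct choice of the 7-vector $\theta$ picks out a different element of the function space $V$.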

Here, we have different examples of elements of the function space.

[Figure: examples of elements of the function space]

They both have the same structure but different values for the parameters.

2.2. Loss index

The loss index is a functional that defines the task for the neural network and provides a measure of quality. The choice of the functional depends on the application of the neural network. Some examples are the mean squared error or the cross-entropy error. In both cases, we have a dataset with $k$ points $(x_i,y_i)$, and we are looking for a function $y(x;\theta)$ that will fit them.

  • Mean squared error: $E[y(x;\theta)] = \frac{1}{k} \sum_{i=1}^k (y(x_i;\theta)-y_i)^2$
  • Cross-entropy error: $E[y(x;\theta)] = -\frac{1}{k} \sum_{i=1}^k y_i \log(y(x_i;\theta))$
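The mean squared error translates directly into code. A minimal sketch, where `model` and the data points are illustrative stand-ins:

```python
# Mean squared error: average the squared differences between the model's
# outputs and the target values over the k data points.

def mse(model, points):
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)
```

Evaluating `mse` for every candidate function in the space is exactly what the loss index does; the learning problem is to find the function that makes it smallest.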

We formulate the variational problem, which is the learning task of the neural network: we look for the function $y^*(x;\theta^*) \in V$ for which the functional $F[y(x;\theta)]$ gives a minimum value. In general, we approximate the solution with direct methods.

Let’s see an example of a variational problem. Consider the function space of the previous neural network. Find the element $y^*(x;\theta^*)$ such that the mean squared error functional, $\mathrm{MSE}$, is minimized. We have a series of points, and we are looking for a function that will fit them.

[Figure: the data points and the initial, unfitted function]

We have taken a random function $y(x;\theta)=-x$ for now; its functional value, $\mathrm{MSE}[y(x;\theta)]=1.226$, is not yet minimal. Next, we will study a method to solve this problem.

2.3. Optimization algorithm

The objective functional $F$ has an objective function $f$ associated. $$\begin{align*} f \colon \Theta \subset R^d &\to R\\ \theta &\mapsto f(\theta) \end{align*}$$

We can reduce the variational problem to a function optimization problem: we aim to find the vector of free parameters $\theta^* \in \Theta$ that makes $f(\theta)$ attain its minimum value. With it, the objective functional achieves its minimum.

The optimization algorithm will solve the reduced function optimization problem. There are many methods for the optimization process, such as the Quasi-Newton method or the gradient descent.

Let’s continue the example from before. We have the same set of points and are looking for a function that will fit them. Having reduced the variational problem to a function optimization problem, the optimization algorithm searches for a vector $\theta^*$ that minimizes the mean squared error function.

[Figure: the data points and the fitted function]

After finding this vector $\theta^*=(-1.16, -1.72, 0.03, 2.14, -1.67, -1.72, 1.24)$, for which $\mathrm{MSE}(\theta^*)=0.007$, we obtain the function $$y^*(x;\theta^*)=-1.16-1.72\tanh(0.03+2.14x)-1.67\tanh(-1.72+1.24x),$$ which minimizes the functional. This is the solution to the problem.

3. Conclusions

We have explained neural networks from the perspective of variational calculus.

The neural network spans a parameterized function space. In this space, we formulate the variational problem. We must minimize a functional to solve it. To do so, we reduce it to a function optimization problem. This is solved with an optimization algorithm, and we obtain the function we sought.

Watch this video to understand the learning problem in neural networks better.

