Choose from 2 available questions to practice AI, arrays, and linear algebra concepts.
The XOR (exclusive OR) gate presents a classic problem in neural network research because it cannot be solved by a single-layer perceptron: the XOR function is not linearly separable. The XOR function takes two binary inputs (0 or 1) and outputs 1 if exactly one of the inputs is 1, and 0 otherwise.
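For reference, here is the XOR truth table sketched in TypeScript (the `xor` helper is hypothetical, written only to illustrate the definition above):

```ts
// XOR outputs 1 iff exactly one of the two binary inputs is 1.
const xor = (a: number, b: number): number => a ^ b;

console.log(xor(0, 0), xor(0, 1), xor(1, 0), xor(1, 1)); // 0 1 1 0
```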
1. Build a sequential model with one hidden layer containing 32 neurons and a relu activation function, and a second hidden layer with 16 neurons and a relu activation function.
2. Add an output layer with the correct number of neurons and a sigmoid activation function for binary classification.
3. Compile the model using binaryCrossentropy for loss.
4. Train the model on the provided XOR dataset (inputs `X` and outputs `Y`).
5. Evaluate the model on the `to_predict` tensor. (A sketch of all five steps follows this list.)
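A minimal TensorFlow.js sketch of steps 1–5. The exercise environment provides `X`, `Y`, and `to_predict`; the concrete values below are assumptions so the sketch runs standalone:

```ts
import * as tf from '@tensorflow/tfjs';

// Assumed XOR dataset; in the exercise these tensors are provided.
const X = tf.tensor2d([[0, 0], [0, 1], [1, 0], [1, 1]]);
const Y = tf.tensor2d([[0], [1], [1], [0]]);
const to_predict = tf.tensor2d([[0, 1]]);

// Steps 1-2: two relu hidden layers, then a single sigmoid output neuron.
const model = tf.sequential();
model.add(tf.layers.dense({units: 32, activation: 'relu', inputShape: [2]}));
model.add(tf.layers.dense({units: 16, activation: 'relu'}));
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}));

// Step 3: binary cross-entropy loss (TF.js spells it 'binaryCrossentropy').
model.compile({optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy']});

// Steps 4-5: train, then predict on the to_predict tensor.
(async () => {
  await model.fit(X, Y, {epochs: 200, verbose: 0});
  const prediction = model.predict(to_predict) as tf.Tensor;
  prediction.round().print(); // e.g. [[1]] for input [0, 1]
})();
```

Rounding the sigmoid output yields a [1, 1]-shaped tensor like the one shown below.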
Tensor Data -> Shape: [1, 1] | Values: [[1]]
You've built the structure of a model! Now, let's understand its parts. The relu activation function replaces every negative value in a tensor with 0 and leaves non-negative values unchanged.
Examine your model and consider how the relu activation works: if the tensor [-5, 0.5, 3, -1] were processed by relu, what would the output be?
Model layers: Dense(32, relu). ReLU output: [0, 0.5, 3, 0]
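A quick way to check that answer in TensorFlow.js (a minimal sketch; assumes `@tensorflow/tfjs` is available):

```ts
import * as tf from '@tensorflow/tfjs';

// relu maps negatives to 0 and passes non-negative values through unchanged.
tf.relu(tf.tensor1d([-5, 0.5, 3, -1])).print(); // [0, 0.5, 3, 0]
```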