Classification problem using one or more simple linear perceptrons
In this exercise, we solve the classification problem shown below using one or more simple perceptrons (no hidden layers), each with two neurons at the input layer and a linear activation function.
N.B.0.: the activation function(s) MUST be linear.
N.B.1.: the colors represent whole regions rather than just single points.
N.B.2.: The solution must be done "by hand", without any implementation in R or Python, and with as little calculation as possible (ideally none at all). The most important thing is to explain the reasoning clearly and logically, and to draw the architecture of the network(s) under consideration.
N.B.3.: Consider the tangent function; it can facilitate the resolution of this problem. (You may also think of something else, but please try to avoid using the product function.)
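Although the exercise asks for a by-hand answer, a minimal Python sketch may help make the setup concrete. The weights `w1`, `w2` and bias `b` below are hypothetical placeholders, not a solution: the point is only that a perceptron with a linear activation computes an affine function of its two inputs, so its decision boundary is a straight line in the plane.

```python
# Minimal illustrative sketch (NOT a solution to the exercise):
# a simple perceptron with two input neurons, no hidden layer,
# and a linear activation. All weights/bias values are placeholders.

def linear_perceptron(x1, x2, w1, w2, b):
    """Linear activation: the output is just the affine combination."""
    return w1 * x1 + w2 * x2 + b

def classify(x1, x2, w1, w2, b):
    """The decision boundary w1*x1 + w2*x2 + b = 0 is a straight line;
    a point is classified by the sign of the linear output."""
    return 1 if linear_perceptron(x1, x2, w1, w2, b) >= 0 else 0

# Example: with w1 = 1, w2 = -1, b = 0 the boundary is the line x2 = x1.
print(classify(2.0, 1.0, 1.0, -1.0, 0.0))  # -> 1 (the point (2, 1) lies below x2 = x1)
```

Since each such unit can only draw one line, solving the problem amounts to choosing lines (and hence one perceptron per line) whose half-planes carve out the colored regions.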