How do we take the mean of a mathematical function using statistics?

I am unfamiliar with statistics; my background is in pure mathematics.

Suppose $n\in\mathbb{N}$, let $A$ be an arbitrary set with $A\subseteq\mathbb{R}^{n}$, and let $f:A\to\mathbb{R}$ be a function. I would like someone to describe, in rigorous detail, how one uses methods from statistics to find the mean of:

  1. $f$ defined on a finite $A$
  2. $f$ defined on an uncountable Lebesgue-measurable $A$ with positive measure
  3. $f$ defined on a Lebesgue-measurable $A$ with zero or infinite measure
I have heard that for the first two cases one can use the uniform probability measure, as described in this link, but that for the third case one uses neither the uniform measure nor the normalized Hausdorff measure; instead one uses the cumulative distribution function of the image of $A$ under $f$, i.e. $f[A]=\left\{f(x):x\in A\right\}$. For instance, can one find the mean of the Cantor function?
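
For concreteness, here is a minimal numerical sketch of the first two cases under the uniform probability measure; the finite set, the integrand, the Cantor-function approximation depth, and the Monte Carlo sample size below are illustrative choices, not part of the question.

```python
import random

# Case 1: f on a finite set A -- the mean is the plain average of the values.
A_finite = [0.0, 0.5, 1.0, 2.0]            # illustrative finite set
f = lambda x: x ** 2
mean_finite = sum(f(x) for x in A_finite) / len(A_finite)

# Case 2: f on A = [0, 1] (positive Lebesgue measure) -- the mean is
# E[f(X)] with X uniform on A, estimated here by Monte Carlo.
def cantor(x, depth=40):
    """Approximate the Cantor (devil's staircase) function on [0, 1]."""
    value, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1/3:
            x = 3 * x                      # c(x) = c(3x)/2
        elif x > 2/3:
            value += scale                 # c(x) = 1/2 + c(3x - 2)/2
            x = 3 * x - 2
        else:
            return value + scale           # x lies in a removed middle third
        scale /= 2
    return value

samples = [cantor(random.random()) for _ in range(200_000)]
mean_cantor = sum(samples) / len(samples)  # should be close to 1/2
print(mean_finite, mean_cantor)
```

Since the Cantor function satisfies $c(x)+c(1-x)=1$, its exact mean on $[0,1]$ is $1/2$, which gives a quick sanity check on the estimate.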

Lastly, can we use statistics to show that "almost all" functions (highlighted here and here) do not have a mean?
  • Mathe Mathe

    I would suggest increasing the bounty because this problem contains several (five) questions.

Answer


Mathe Mathe
  • Is it possible to add the books or links you got the definitions from? (I should have added this to my question.) It appears some of the answers are mathematical rather than statistical (except for the mean of $f$ defined on $A$ with infinite measure). Is it true that, for the mean of functions, mathematics and statistics are almost the same? Finally, what do you think about this link: https://mathoverflow.net/a/235609/87856

    • Mathe Mathe

      Well, modern-day statistics is built on the framework of measure theory, and at an advanced level the barriers between statistics and mathematics really do vanish. Because of the intrinsically complex nature of your questions, I tried to answer them with the same depth and complexity. In more 'down to earth' statistics, one does not consider 'almost every' function or sets of measure zero at all, especially the latter, since they would represent events of probability 0.

    • Mathe Mathe

      The link you reference provides a way to define integrals over self-similar (fractal-like) sets by exploiting that self-similarity (and introducing weights $p_i$). This approach is of course less general than integrating over an arbitrary set of measure zero (say, the rational numbers); a sketch of the idea follows below.
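
A hedged sketch of that idea (an illustration, not taken from the linked answer): the Cantor set has Lebesgue measure zero, but its natural self-similar measure with weights $p_1=p_2=\tfrac12$ can be sampled directly by drawing ternary digits from $\{0,2\}$, which gives a statistical way to average a function over such a set.

```python
import random

def sample_cantor_measure(depth=40):
    """Draw X (approximately) from the self-similar Cantor measure with
    weights 1/2, 1/2 by choosing each ternary digit uniformly from {0, 2}."""
    return sum(random.choice((0, 2)) * 3.0 ** -(k + 1) for k in range(depth))

f = lambda x: x                          # illustrative integrand
n = 200_000
estimate = sum(f(sample_cantor_measure()) for _ in range(n)) / n
print(estimate)                          # E[X] under the Cantor measure is 1/2
```

The same recipe should extend to other self-similar sets: composing contraction maps chosen with probabilities $p_i$ approximates a draw from the invariant measure, and averaging $f$ over such draws estimates the corresponding integral.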

  • Mathe Mathe

    References for mean values and measure theory: 'Probability and Measure' by Patrick Billingsley (1976); 'A User's Guide to Measure Theoretic Probability' by David Pollard (2001); 'Real Analysis and Probability' by R. M. Dudley.

  • I accepted your answer, but I am having trouble finding the pages that mention these definitions. In the first book I see more about expected value than about the mean of a function. What about the pages in the other books?

  • Mathe Mathe

    Well, the expected value and the mean of a function are essentially the same concept. A mean always depends on a measure; when that measure is finite (but not necessarily equal to one), we simply divide by the total measure to normalize, and when the measure totals 1 it is already a probability measure, so no normalization is needed.
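
As a small sketch of that normalization (the interval, integrand, and sample size are illustrative): on $A=[0,3]$ the mean of $f(x)=x^{2}$ is $\frac{1}{\lambda(A)}\int_{A}f\,d\lambda=3$, which is exactly $E[f(U)]$ for $U$ uniform on $A$.

```python
import random

a, b = 0.0, 3.0                    # A = [0, 3], total measure b - a = 3
f = lambda x: x ** 2

# The Monte Carlo average of f over uniform draws estimates
# E[f(U)] = (1 / (b - a)) * integral of f over [a, b].
n = 200_000
u = [random.uniform(a, b) for _ in range(n)]
mean_estimate = sum(f(x) for x in u) / n
print(mean_estimate)               # exact value: (b**3 / 3) / (b - a) = 3
```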
