# https://Buddha-Wisdom.org -- Buddhist Encyclopedia of Buddha Dharma

## Likely

Snippet from Wikipedia: Probability

Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty. The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
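The fair-coin arithmetic above can be checked directly. A minimal sketch in Python, using exact fractions so that 1/2 is represented without rounding (the sample space and variable names are illustrative):

```python
from fractions import Fraction

# Sample space for one toss of a fair coin: two equally likely outcomes.
outcomes = ["heads", "tails"]

# With equal probability, each outcome gets 1 / (number of outcomes).
p_heads = Fraction(1, len(outcomes))
p_tails = Fraction(1, len(outcomes))

# Probabilities lie between 0 and 1, and over all outcomes they sum to 1.
assert 0 <= p_heads <= 1
assert p_heads + p_tails == 1

print(p_heads)         # 1/2
print(float(p_heads))  # 0.5, i.e. 50%
```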

These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.

Snippet from Wikipedia: Likelihood function

The likelihood function (often simply called the likelihood) describes the joint probability of the observed data as a function of the parameters of the chosen statistical model. For each specific parameter value $\theta$ in the parameter space, the likelihood function $p(X|\theta )$ therefore assigns a probabilistic prediction to the observed data $X$. Since it is essentially the product of sampling densities, the likelihood generally encapsulates both the data-generating process and the missing-data mechanism that produced the observed sample.

To emphasize that the likelihood is not a probability density function (p.d.f.) of the parameters, it is often written as ${\mathcal {L}}(\theta \mid X)$. In maximum likelihood estimation, the likelihood function is maximized to obtain the specific value ${\hat {\theta }}=\operatorname {argmax} _{\theta \in \Theta }{\mathcal {L}}(\theta \mid X)$, i.e. the value of the parameters of the probabilistic model under which the observed data is most probable (or under which it has the highest probability density, in the case of continuous data). Meanwhile, in Bayesian statistics, the likelihood function serves as the conduit through which sample information influences $p(\theta \mid X)$, the posterior probability of the parameter.
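Maximum likelihood estimation can be sketched concretely for a Bernoulli model, where the parameter $\theta$ is a coin's probability of heads. The data below and the grid-search approach are illustrative assumptions, not part of the snippet; in practice one would use the closed-form solution (the sample proportion) or a numerical optimizer:

```python
import math

# Hypothetical observed data X: ten tosses, 1 = heads, 0 = tails (7 heads).
X = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

def log_likelihood(theta, data):
    """Log of L(theta | X) for a Bernoulli model with P(heads) = theta.

    The log turns the product of per-observation probabilities into a sum,
    which is numerically safer and has the same argmax.
    """
    return sum(math.log(theta) if x else math.log(1.0 - theta) for x in data)

# Grid search over the open parameter space Theta = (0, 1).
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, X))

print(theta_hat)  # 0.7 -- the sample proportion of heads maximizes L
```

The maximizer coincides with the fraction of heads in the sample, matching the closed-form Bernoulli MLE.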

The case for using likelihood was first made by R. A. Fisher, who believed it to be a self-contained framework for statistical modelling and inference. Later, Barnard and Birnbaum led a school of thought that advocated the likelihood principle, postulating that all relevant information for inference is contained in the likelihood function. In both frequentist and Bayesian statistics, the likelihood function plays a fundamental role.

Snippet from Wikipedia: Central tendency

In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.

Colloquially, measures of central tendency are often called averages. The term central tendency dates from the late 1920s.

The most common measures of central tendency are the arithmetic mean, the median, and the mode. A central tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value."

The central tendency of a distribution is typically contrasted with its dispersion or variability; dispersion and central tendency are the most commonly characterized properties of distributions. Analysis may judge whether data has a strong or a weak central tendency based on its dispersion.
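The three common measures named above can be computed with Python's standard `statistics` module; the data set here is a made-up example chosen so that the mean, median, and mode all differ:

```python
import statistics

data = [2, 3, 3, 5, 7, 10]  # hypothetical finite set of values

mean = statistics.mean(data)      # arithmetic mean: sum / count = 30 / 6 = 5
median = statistics.median(data)  # middle value: midpoint of 3 and 5 = 4
mode = statistics.mode(data)      # most frequent value: 3

print(mean, median, mode)  # 5 4 3
```

That the three measures disagree reflects the data's asymmetry; for a symmetric distribution such as the normal, mean, median, and mode coincide.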