From Wikipedia, the free encyclopedia

Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data.[1] Other frameworks in the spectrum of supervision include weak- or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.[2]

Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive text corpus obtained by web crawling, with only minor filtering (such as Common Crawl). This compares favorably to supervised learning, where the dataset (such as ImageNet1000) is typically constructed manually, which is much more expensive.

Some algorithms were designed specifically for unsupervised learning, such as clustering algorithms like k-means, dimensionality reduction techniques like principal component analysis (PCA), Boltzmann machine learning, and autoencoders. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures by gradient descent, adapted to unsupervised learning by designing an appropriate training procedure.

Sometimes a trained model can be used as-is, but more often it is modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset, before finetuning it for other applications, such as text classification.[3][4] As another example, autoencoders are trained to produce good features, which can then be used as a module for other models, such as in a latent diffusion model.

Tasks

(Figure caption) Tendency for a task to employ supervised vs. unsupervised methods. Task names that straddle the circle boundaries are placed there intentionally: the classical division in which imaginative (generative) tasks employ unsupervised methods is blurred in today's learning schemes.

Tasks are often categorized as discriminative (recognition) or generative (imagination). Often but not always, discriminative tasks use supervised methods and generative tasks use unsupervised (see Venn diagram); however, the separation is very hazy. For example, object recognition favors supervised learning but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward, some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of dropout, ReLU, and adaptive learning rates.

A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of the data is removed, and the model must infer the removed part. This is particularly clear for denoising autoencoders and BERT.
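
As a minimal illustration (not the actual BERT or denoising-autoencoder pipeline), the following Python sketch builds one such training pair by hiding a randomly chosen token; the toy token ids, the MASK_ID placeholder, and the mask_example helper are illustrative assumptions rather than part of any particular system.

```python
# Minimal sketch (not BERT's actual pipeline): build one training example for a
# "remove part of the data and infer it" objective. MASK_ID and mask_example
# are illustrative names, not from any particular library.
import random

MASK_ID = -1  # placeholder id standing in for a special [MASK] token

def mask_example(token_ids, rng=random):
    """Hide one randomly chosen token; the model's target is the hidden token."""
    position = rng.randrange(len(token_ids))
    corrupted = list(token_ids)
    target = corrupted[position]
    corrupted[position] = MASK_ID
    return corrupted, position, target

tokens = [12, 7, 99, 3, 42]            # a toy "sentence" of token ids
corrupted, pos, target = mask_example(tokens)
print(corrupted, pos, target)           # e.g. [12, 7, -1, 3, 42] 2 99
# A model trained on such pairs learns p(removed part | remaining data).
```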

Neural network architectures


Training


During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e. correct its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high-energy state in the network.

In contrast to supervised methods' dominant use of backpropagation, unsupervised learning also employs other methods, including the Hopfield learning rule, the Boltzmann learning rule, contrastive divergence, wake-sleep, variational inference, maximum likelihood, maximum a posteriori, Gibbs sampling, and backpropagating reconstruction errors or hidden state reparameterizations. See the table below for more details.
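
As a minimal sketch of the last of these, backpropagating reconstruction errors, the following snippet trains a small linear autoencoder by gradient descent on the mean squared reconstruction error; the data, layer sizes, learning rate, and iteration count are arbitrary illustrative choices.

```python
# Minimal sketch of "backpropagating reconstruction errors": a linear
# autoencoder trained by plain gradient descent with NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # unlabeled data, 200 samples of dim 8
W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder weights (8 -> 3)
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder weights (3 -> 8)
lr = 0.01

for step in range(500):
    Z = X @ W_enc                             # hidden code
    X_hat = Z @ W_dec                         # reconstruction
    err = X_hat - X                           # reconstruction error
    loss = np.mean(err ** 2)
    # Gradients of the mean squared reconstruction error.
    scale = 2.0 / (X.shape[0] * X.shape[1])
    grad_dec = Z.T @ err * scale
    grad_enc = X.T @ (err @ W_dec.T) * scale
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction loss:", loss)
```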

Energy


An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas' macroscopic energy from the microscopic probabilities of particle motion, p ∝ e^(−E/kT), where k is the Boltzmann constant and T is temperature. In the RBM network the relation is p = e^(−E) / Z,[5] where p and E vary over every possible activation pattern and Z = Σ_(all patterns) e^(−E(pattern)). To be more precise, p(a) = e^(−E(a)) / Z, where a is an activation pattern of all neurons (visible and hidden). Hence, some early neural networks bear the name Boltzmann machine. Paul Smolensky calls −E the Harmony. A network seeks low energy, which is high Harmony.
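
A minimal sketch of this energy-probability relation is given below, assuming the standard RBM energy E(v, h) = −a·v − b·h − v·W·h and brute-force enumeration of a toy network; the weights and biases are arbitrary illustrative values.

```python
# Minimal sketch: the relation p(v, h) = exp(-E(v, h)) / Z computed by brute
# force for a tiny RBM (2 visible, 2 hidden binary units). Toy parameters.
import itertools
import numpy as np

W = np.array([[ 0.5, -0.3],
              [ 0.2,  0.8]])            # visible-to-hidden weights
a = np.array([0.1, -0.2])               # visible biases
b = np.array([0.0,  0.3])               # hidden biases

def energy(v, h):
    """Standard RBM energy: E(v, h) = -a.v - b.h - v.W.h"""
    return -(a @ v) - (b @ h) - (v @ W @ h)

patterns = list(itertools.product([0, 1], repeat=2))
Z = sum(np.exp(-energy(np.array(v), np.array(h)))
        for v in patterns for h in patterns)     # partition function

v, h = np.array([1, 0]), np.array([0, 1])
p = np.exp(-energy(v, h)) / Z
print("E =", energy(v, h), " p =", p)            # low energy <-> high probability
```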

Networks


The connection structures of various unsupervised networks are described below; the details of each are given in the section Comparison of Networks. In the connection diagrams, circles are neurons and edges between them are connection weights. As network design changes, features are added to enable new capabilities or removed to make learning faster. For instance, neurons change between deterministic (Hopfield) and stochastic (Boltzmann) to allow robust output, weights are removed within a layer (RBM) to hasten learning, or connections are allowed to become asymmetric (Helmholtz).

Hopfield: A network based on magnetic domains in iron with a single self-connected layer. It can be used as a content-addressable memory.
Boltzmann: The network is separated into two layers (hidden vs. visible), but still uses symmetric two-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.
RBM: Restricted Boltzmann machine. This is a Boltzmann machine where lateral connections within a layer are prohibited to make analysis tractable.
Stacked Boltzmann: This network has multiple RBMs to encode a hierarchy of hidden features. After a single RBM is trained, another hidden layer is added on top, and the top two layers are trained as a new RBM. Thus the middle layers of an RBM act as hidden or visible, depending on the training phase it is in.
Helmholtz: Instead of the bidirectional symmetric connections of the stacked Boltzmann machines, there are separate one-way connections that form a loop. It does both generation and discrimination.
Autoencoder: A feed-forward network that aims to find a good middle-layer representation of its input world. This network is deterministic, so it is not as robust as its successor, the VAE.
VAE: Applies variational inference to the autoencoder. The middle layer is a set of means and variances for Gaussian distributions. The stochastic nature allows for more robust imagination than the deterministic autoencoder.

Of the networks bearing people's names, only Hopfield worked directly with neural networks. Boltzmann and Helmholtz came before artificial neural networks, but their work in physics and physiology inspired the analytical methods that were used.

History

1974 Ising magnetic model proposed by W. A. Little for cognition.
1980 Kunihiko Fukushima introduces the neocognitron, which is later called a convolutional neural network. It is mostly used in supervised learning, but deserves a mention here.
1982 Ising variant Hopfield net described as content-addressable memories (CAMs) and classifiers by John Hopfield.
1983 Ising variant Boltzmann machine with probabilistic neurons described by Hinton & Sejnowski following Sherrington & Kirkpatrick's 1975 work.
1986 Paul Smolensky publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
1995 Schmidhuber introduces the LSTM neuron for languages.
1995 Dayan & Hinton introduce the Helmholtz machine.
2013 Kingma, Rezende, & co. introduce variational autoencoders as a Bayesian graphical probability network, with neural nets as components.

Specific Networks


Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.

Hopfield Network
Ferromagnetism inspired Hopfield networks. A neuron corresponds to an iron domain with binary magnetic moments up and down, and neural connections correspond to the domains' influence on each other. Symmetric connections enable a global energy formulation. During inference the network updates each state using the standard activation step function. Symmetric weights and the right energy function guarantee convergence to a stable activation pattern (a minimal sketch of this recall procedure is given after this list). Asymmetric weights are difficult to analyze. Hopfield nets are used as content-addressable memories (CAM).
Boltzmann Machine
These are stochastic Hopfield nets. Each neuron's state value is sampled from its probability distribution as follows: suppose a binary neuron fires with Bernoulli probability p(1) = 1/3 and rests with p(0) = 2/3. One samples from it by taking a uniformly distributed random number y and plugging it into the inverted cumulative distribution function, which in this case is the step function thresholded at 2/3. The inverse function = { 0 if y <= 2/3, 1 if y > 2/3 }.
Sigmoid Belief Net
Introduced by Radford Neal in 1992, this network applies ideas from probabilistic graphical models to neural networks. A key difference is that nodes in graphical models have pre-assigned meanings, whereas Belief Net neurons' features are determined after training. The network is a sparsely connected directed acyclic graph composed of binary stochastic neurons. The learning rule comes from maximum likelihood on p(X): Δw_ij ∝ s_j * (s_i − p_i), where p_i = 1 / (1 + e^(−(weighted inputs into neuron i))). The s_j are activations from an unbiased sample of the posterior distribution, which is problematic due to the explaining-away problem raised by Judea Pearl. Variational Bayesian methods use a surrogate posterior and simply disregard this complexity.
Deep Belief Network
Introduced by Hinton, this network is a hybrid of the RBM and the sigmoid belief network. The top two layers form an RBM and the layers below form a sigmoid belief network. One trains it by the stacked-RBM method and then throws away the recognition weights below the top RBM. As of 2009, 3-4 layers seemed to be the optimal depth.[6]
Helmholtz machine
These are early inspirations for the variational autoencoders. The Helmholtz machine combines two networks into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input".[7] The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine generation mode, the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has 3 layers.
Variational autoencoder
These are inspired by Helmholtz machines and combine probabilistic networks with neural networks. An autoencoder is a 3-layer CAM network, where the middle layer is supposed to be some internal representation of the input patterns. The encoder neural network is a probability distribution q_φ(z | x) and the decoder network is p_θ(x | z). The weights are named φ and θ rather than W and V as in Helmholtz, a cosmetic difference. These two networks can be fully connected, or use another NN scheme.
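
A minimal sketch of the Hopfield recall procedure referenced in the Hopfield Network entry above is given here: two toy +1/−1 patterns are stored with the Hebbian outer-product rule and one of them is recovered from a corrupted cue; the patterns and network size are illustrative assumptions.

```python
# Minimal sketch of Hopfield content-addressable memory: store +1/-1 patterns
# with the Hebbian outer-product rule, then recall from a corrupted cue by
# repeated sign-threshold updates. The stored patterns are toy examples.
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]])

# Hebbian storage: W = sum over patterns of p p^T, symmetric, zero diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronous updates s <- sign(W s); symmetric weights give a Lyapunov
    energy, so the state settles into a stored pattern (or a spurious minimum)."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

cue = np.array([1, -1, 1, -1, 1, 1])    # first pattern with one flipped bit
print(recall(cue))                       # recovers [ 1 -1  1 -1  1 -1]
```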

Comparison of networks

Usage & notables
Hopfield: CAM, traveling salesman problem.
Boltzmann: CAM. The freedom of connections makes this network difficult to analyze.
RBM: Pattern recognition; used on MNIST digits and speech.
Stacked RBM: Recognition and imagination; trained with unsupervised pre-training and/or supervised fine-tuning.
Helmholtz: Imagination, mimicry.
Autoencoder: Language: creative writing, translation. Vision: enhancing blurry images.
VAE: Generating realistic data.

Neuron
Hopfield: Deterministic binary state. Activation = { 0 (or −1) if x is negative, 1 otherwise }.
Boltzmann: Stochastic binary Hopfield neuron.
RBM: Same as Boltzmann (extended to real-valued in the mid-2000s).
Stacked RBM: Same as RBM.
Helmholtz: Same as RBM.
Autoencoder: Language: LSTM. Vision: local receptive fields. Usually real-valued ReLU activation.
VAE: Middle-layer neurons encode means and variances for Gaussians. In run mode (inference), the outputs of the middle layer are sampled values from the Gaussians.

Connections
Hopfield: 1 layer with symmetric weights. No self-connections.
Boltzmann: 2 layers (1 hidden, 1 visible); symmetric weights.
RBM: Same as Boltzmann, but with no lateral connections within a layer.
Stacked RBM: Top layer is undirected, symmetric. Other layers are 2-way, asymmetric.
Helmholtz: 3 layers; asymmetric weights; 2 networks combined into 1.
Autoencoder: 3 layers (the input is considered a layer even though it has no inbound weights). Recurrent layers for NLP; feedforward convolutions for vision. Input and output have the same neuron counts.
VAE: 3 layers: input, encoder, distribution-sampler decoder (the sampler is not considered a layer).

Inference & energy
Hopfield: Energy is given by the Gibbs probability measure: E = −1/2 Σ_(i,j) w_ij s_i s_j + Σ_i θ_i s_i.
Boltzmann: Same as Hopfield.
RBM: Same as Hopfield.
Helmholtz: Minimize KL divergence.
Autoencoder: Inference is only feed-forward; previous UL networks ran forwards and backwards.
VAE: Minimize error = reconstruction error − KL divergence.

Training
Hopfield: Δw_ij = s_i * s_j, for +1/−1 neurons.
Boltzmann: Δw_ij = e * (p_ij − p'_ij), derived from minimizing the KL divergence; e = learning rate, p' = predicted and p = actual distribution.
RBM: Δw_ij = e * ( <v_i h_j>_data − <v_i h_j>_equilibrium ). This is a form of contrastive divergence with Gibbs sampling; "< >" denotes expectations.
Stacked RBM: Similar to RBM; train 1 layer at a time; approximate the equilibrium state with a 3-segment pass; no backpropagation.
Helmholtz: Wake-sleep 2-phase training.
Autoencoder: Backpropagate the reconstruction error.
VAE: Reparameterize the hidden state for backpropagation.

Strength
Hopfield: Resembles physical systems, so it inherits their equations.
Boltzmann: Same as Hopfield; hidden neurons act as an internal representation of the external world.
RBM: Faster, more practical training scheme than Boltzmann machines.
Stacked RBM: Trains quickly; gives a hierarchical layer of features.
Helmholtz: Mildly anatomical; analyzable with information theory and statistical mechanics.

Weakness
Boltzmann: Hard to train due to lateral connections; reaching equilibrium requires too many iterations.
RBM: Integer- and real-valued neurons are more complicated.

Hebbian Learning, ART, SOM


The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together.[8] In Hebbian learning, the connection is reinforced irrespective of an error, but is exclusively a function of the coincidence of action potentials between the two neurons.[9] A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
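
A minimal sketch of the Hebbian rule, in which a weight grows in proportion to the coincidence of pre- and postsynaptic activity, is shown below; the synthetic "spike trains" and the learning rate are illustrative assumptions.

```python
# Minimal sketch of Hebbian learning: the weight between two units grows in
# proportion to the coincidence of their activity ("fire together, wire
# together"). The binary activity traces below are synthetic toy data.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.integers(0, 2, size=1000)                     # presynaptic activity (0/1)
post = np.where(rng.random(1000) < 0.8, pre, 1 - pre)   # mostly follows pre

eta = 0.01                                              # learning rate
w = 0.0
for x, y in zip(pre, post):
    w += eta * x * y                                    # reinforced only on coincidence
print("learned weight:", w)
```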

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used in unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.[10]
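
A minimal sketch of a self-organizing map follows: a one-dimensional chain of units is fitted to two-dimensional data so that nearby units come to represent similar inputs. The map size, learning-rate schedule, and neighborhood width are arbitrary illustrative choices.

```python
# Minimal sketch of a self-organizing map (SOM): a 1-D chain of units is
# fitted to 2-D data; a Gaussian neighborhood ties nearby map units together.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 2))                  # unlabeled 2-D points in [0, 1]^2
n_units = 10
weights = rng.random((n_units, 2))           # one prototype vector per map unit
positions = np.arange(n_units)               # unit coordinates on the 1-D map

lr, sigma = 0.5, 2.0
for x in data:
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))      # best matching unit
    # Gaussian neighborhood on the map: units near the BMU move more.
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)
    lr *= 0.995                                                # decay schedules
    sigma = max(0.5, sigma * 0.995)

print(weights)    # learned prototypes for the 10 map units
```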

Probabilistic methods


Two of the main methods used in unsupervised learning are principal component analysis and cluster analysis. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships.[11] Cluster analysis is a branch of machine learning that groups data that have not been labelled, classified or categorized. Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into any group.

A central application of unsupervised learning is in the field of density estimation in statistics,[12] though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution p_X(x | y) conditioned on the label y of the input data, unsupervised learning intends to infer an a priori probability distribution p_X(x).
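
As a minimal sketch of this contrast, the snippet below fits a single Gaussian to unlabeled samples (an estimate of p_X(x)) and, when labels are available, one Gaussian per label (estimates of p_X(x | y)); the Gaussian model and the synthetic data are illustrative assumptions.

```python
# Minimal sketch: with no labels we can only estimate the prior density p(x);
# with labels we can estimate p(x | y). Gaussian fits on synthetic 1-D data.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(loc=-2.0, scale=1.0, size=300)       # hidden class y = 0
x1 = rng.normal(loc=3.0, scale=0.5, size=300)        # hidden class y = 1
x = np.concatenate([x0, x1])                          # what unsupervised learning sees
y = np.concatenate([np.zeros(300), np.ones(300)])     # what supervised learning also sees

# Unsupervised: fit p(x), ignoring labels.
print("p(x):      mean %.2f, std %.2f" % (x.mean(), x.std()))

# Supervised-style: fit p(x | y) separately for each label.
for label in (0, 1):
    xs = x[y == label]
    print("p(x|y=%d): mean %.2f, std %.2f" % (label, xs.mean(), xs.std()))
```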

Approaches


Some of the most common algorithms used in unsupervised learning include: (1) clustering, (2) anomaly detection, and (3) approaches for learning latent variable models. Each approach uses several methods; for example, clustering includes algorithms such as k-means and hierarchical clustering,[13][14] while latent variable models can be learned with the expectation-maximization algorithm or the method of moments described below.
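
A minimal sketch of clustering with k-means (Lloyd's algorithm) follows; the number of clusters, the synthetic data, and the fixed iteration count are illustrative choices.

```python
# Minimal sketch of k-means clustering (Lloyd's algorithm) on synthetic blobs.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ([0, 0], [3, 3], [0, 4])])            # 3 synthetic blobs

k = 3
centroids = X[rng.choice(len(X), size=k, replace=False)]       # random init from data
for _ in range(20):
    # Assignment step: each point goes to its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each centroid moves to the mean of its assigned points.
    centroids = np.array([
        X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

print(centroids)    # learned cluster centers
```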

Method of moments


One of the statistical approaches for unsupervised learning is the method of moments. In the method of moments, the unknown parameters (of interest) in the model are related to the moments of one or more random variables, and thus these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first and second order moments. For a random vector, the first order moment is the mean vector, and the second order moment is the covariance matrix (when the mean is zero). Higher order moments are usually represented using tensors, which generalize matrices to higher orders as multi-dimensional arrays.
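
A minimal sketch of estimating the first and second moments empirically from samples of a random vector is given below; the underlying distribution is an arbitrary illustrative choice.

```python
# Minimal sketch of the method of moments: estimate the empirical first moment
# (mean vector) and second central moment (covariance matrix) from samples.
import numpy as np

rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[1.0, 0.3],
                     [0.3, 0.5]])
samples = rng.multivariate_normal(true_mean, true_cov, size=5000)

m1 = samples.mean(axis=0)                  # empirical first moment
m2 = np.cov(samples, rowvar=False)         # empirical second (central) moment

print("estimated mean:      ", m1)         # close to true_mean
print("estimated covariance:\n", m2)       # close to true_cov
```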

In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where, in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is topic modeling, which is a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document. In topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document changes. It has been shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.[15]

The expectation-maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed to converge to the true unknown parameters of the model. In contrast, for the method of moments, global convergence is guaranteed under some conditions.
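
A minimal sketch of expectation-maximization for a simple latent variable model, a one-dimensional mixture of two Gaussians, is given below; the synthetic data, initial parameter values, and fixed iteration count are illustrative, and no convergence check is performed.

```python
# Minimal sketch of expectation-maximization (EM) for a latent variable model:
# a 1-D mixture of two Gaussians, where the component label is the latent variable.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1.0, 400), rng.normal(3, 0.7, 600)])

# Initial guesses for mixture weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: responsibilities r[i, k] = p(latent component k | x_i).
    r = np.stack([pi[k] * gauss(x, mu[k], var[k]) for k in (0, 1)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", pi, "means:", mu, "variances:", var)
```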

See also


References

  1. ^ Wu, Wei. "Unsupervised Learning" (PDF). Archived (PDF) from the original on 14 April 2024. Retrieved 26 April 2024.
  2. ^ Liu, Xiao; Zhang, Fanjin; Hou, Zhenyu; Mian, Li; Wang, Zhaoyu; Zhang, Jing; Tang, Jie (2021). "Self-supervised Learning: Generative or Contrastive". IEEE Transactions on Knowledge and Data Engineering: 1. arXiv:2006.08218. doi:10.1109/TKDE.2021.3090866. ISSN 1041-4347.
  3. ^ Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
  4. ^ Li, Zhuohan; Wallace, Eric; Shen, Sheng; Lin, Kevin; Keutzer, Kurt; Klein, Dan; Gonzalez, Joey (2025-08-06). "Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers". Proceedings of the 37th International Conference on Machine Learning. PMLR: 5958–5968.
  5. ^ Hinton, G. (2012). "A Practical Guide to Training Restricted Boltzmann Machines" (PDF). Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science. Vol. 7700. Springer. pp. 599–619. doi:10.1007/978-3-642-35289-8_32. ISBN 978-3-642-35289-8. Archived (PDF) from the original on 2025-08-06. Retrieved 2025-08-06.
  6. ^ "Deep Belief Nets" (video). September 2009. Archived from the original on 2025-08-06. Retrieved 2025-08-06.
  7. ^ Dayan, Peter; Hinton, Geoffrey E.; Neal, Radford M.; Zemel, Richard S. (1995). "The Helmholtz machine". Neural Computation. 7 (5): 889–904. doi:10.1162/neco.1995.7.5.889. hdl:21.11116/0000-0002-D6D3-E. PMID 7584891. S2CID 1890561.
  8. ^ Buhmann, J.; Kuhnel, H. (1992). "Unsupervised and supervised data clustering with competitive neural networks". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. Vol. 4. IEEE. pp. 796–801. doi:10.1109/ijcnn.1992.227220. ISBN 0780305590. S2CID 62651220.
  9. ^ Comesaña-Campos, Alberto; Bouza-Rodríguez, José Benito (June 2016). "An application of Hebbian learning in the design process decision-making". Journal of Intelligent Manufacturing. 27 (3): 487–506. doi:10.1007/s10845-014-0881-z. ISSN 0956-5515. S2CID 207171436.
  10. ^ Carpenter, G.A. & Grossberg, S. (1988). "The ART of adaptive pattern recognition by a self-organizing neural network" (PDF). Computer. 21 (3): 77–88. doi:10.1109/2.33. S2CID 14625094. Archived from the original (PDF) on 2025-08-06. Retrieved 2025-08-06.
  11. ^ Roman, Victor (2025-08-06). "Unsupervised Machine Learning: Clustering Analysis". Medium. Archived from the original on 2025-08-06. Retrieved 2025-08-06.
  12. ^ Jordan, Michael I.; Bishop, Christopher M. (2004). "7. Intelligent Systems §Neural Networks". In Tucker, Allen B. (ed.). Computer Science Handbook (2nd ed.). Chapman & Hall/CRC Press. doi:10.1201/9780203494455. ISBN 1-58488-360-X. Archived from the original on 2025-08-06. Retrieved 2025-08-06.
  13. ^ Hastie, Tibshirani & Friedman 2009, pp. 485–586
  14. ^ Garbade, Dr Michael J. (2025-08-06). "Understanding K-means Clustering in Machine Learning". Medium. Archived from the original on 2025-08-06. Retrieved 2025-08-06.
  15. ^ Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham; Telgarsky, Matus (2014). "Tensor Decompositions for Learning Latent Variable Models" (PDF). Journal of Machine Learning Research. 15: 2773–2832. arXiv:1210.7559. Bibcode:2012arXiv1210.7559A. Archived (PDF) from the original on 2025-08-06. Retrieved 2025-08-06.

Further reading
