Can Global Semantic Context Improve Neural Language Models? I don't know, but that's the question asked and answered in the latest entry on Apple's Machine Learning Journal.

From Apple:

Today, most techniques for training word embeddings capture the local context of a given word in a sentence as a window containing a relatively small number of words (say, 5) before and after the word in question—"the company it keeps" nearby. For example, the word "self-evident" in the U.S. Declaration of Independence has a local context given by "hold these truths to be" on the left and "that all men are created" on the right.

In this article, we describe an extension of this approach to one that captures instead the entire semantic fabric of the document—for example, the entire Declaration of Independence. Can this global semantic context result in better language models? Let's first take a look at the current use of word embeddings.
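To make the "local context" idea concrete, here's a minimal sketch in Python using the Declaration of Independence example from Apple's excerpt. The window size of 5 matches the "relatively small number of words" the article mentions; everything else (the tokenization, the helper function) is purely illustrative and not Apple's actual training pipeline.

```python
# Illustrative only: extract a local context window around a target word,
# the kind of neighborhood typical word-embedding training looks at.

sentence = (
    "We hold these truths to be self-evident that all men are created equal"
).split()

def local_context(tokens, target, window=5):
    """Return up to `window` words on each side of `target`."""
    i = tokens.index(target)
    left = tokens[max(0, i - window):i]
    right = tokens[i + 1:i + 1 + window]
    return left, right

left, right = local_context(sentence, "self-evident")
print(left)   # ['hold', 'these', 'truths', 'to', 'be']
print(right)  # ['that', 'all', 'men', 'are', 'created']

# A "global" semantic context, by contrast, would condition on a
# representation of the entire document (the whole Declaration),
# not just these ten neighboring words.
```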

It's heady stuff, but a good read for anyone interested in how Apple is working to make Siri and features like QuickType better.

