(1)

A Simple Search Engine

Benjamin Roth

CIS LMU

(2)

Document Collection for Search Engine

Now that we can represent a single document, let's represent a collection of documents for search.

What does such a class for representing a document collection need?

- Information to store?

- Functionality?

(3)

Document Collection for Search Engine

What does a class need for representing a document collection for search?

Information to store:

- Store the documents, and access them via an id.

- An inverted index: a map from each term to all documents containing that term (for efficiently finding all potentially relevant documents).

- The document frequency of each term (the number of documents in which it occurs), to be used in similarity computation.
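As a toy illustration (a hypothetical two-document collection, not from the lecture), these three fields could look like this:

# Toy collection (hypothetical): d1 = "the cat sat", d2 = "the cat ran"
term_to_df = {"the": 2, "cat": 2, "sat": 1, "ran": 1}        # term -> document frequency
term_to_docids = {"the": {"d1", "d2"}, "cat": {"d1", "d2"},
                  "sat": {"d1"}, "ran": {"d2"}}              # term -> ids of docs containing it
docid_to_doc = {"d1": ..., "d2": ...}                        # id -> the corresponding TextDocument object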

Functionality:

- Read documents (from a directory).

- Return (all) documents that contain (all) terms of a query.

- Reweight token frequencies by tf-idf weighting.

- Compute the cosine similarity of two documents.

(4)

Document Collection (Code Skeleton)

class DocumentCollection:

    def __init__(self, term_to_df, term_to_docids, docid_to_doc):
        # ...

    @classmethod
    def from_dir(cls, root_dir, file_suffix):
        # ...

    @classmethod
    def from_document_list(cls, docs):
        # ...

    def docs_with_all_tokens(self, tokens):
        # ...

    def tfidf(self, counts):
        # ...

    def cosine_similarity(self, docA, docB):
        # ...
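Once implemented, the class might be used roughly like this (a sketch; the directory name, file suffix, and query terms are made up):

collection = DocumentCollection.from_dir("corpus/", ".txt")
matches = collection.docs_with_all_tokens(["cat", "sat"])
if len(matches) >= 2:
    print(collection.cosine_similarity(matches[0], matches[1]))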

(5)

Detail: Constructor

Set all the required data fields

def __init__(self, term_to_df, term_to_docids, docid_to_doc):
    # string to int
    self.term_to_df = term_to_df
    # string to set of string
    self.term_to_docids = term_to_docids
    # string to TextDocument
    self.docid_to_doc = docid_to_doc
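For illustration, from_document_list could build the three data fields roughly as follows. This is a sketch, not the lecture's implementation; it assumes each TextDocument exposes an id attribute in addition to the token_counts dictionary used later:

from collections import defaultdict

@classmethod
def from_document_list(cls, docs):      # inside DocumentCollection
    term_to_df = defaultdict(int)       # term -> number of documents containing it
    term_to_docids = defaultdict(set)   # term -> set of document ids
    docid_to_doc = {}
    for doc in docs:
        docid_to_doc[doc.id] = doc
        for term in doc.token_counts:   # each distinct term counts once per document
            term_to_df[term] += 1
            term_to_docids[term].add(doc.id)
    return cls(term_to_df, term_to_docids, docid_to_doc)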

(6)

Detail: Get all documents containing all search terms

def docs_with_all_tokens(self, tokens):
    docids_for_each_token = [self.term_to_docids[token]
                             for token in tokens]
    docids = set.intersection(*docids_for_each_token)
    return [self.docid_to_doc[id] for id in docids]

What does docids_for_each_token contain?

What is contained in docids?

How can we get all documents that contain any of the search terms?

Bonus: What could be (roughly) the time complexity of set.intersection(...)?

(7)

Detail: Get all documents containing all search terms

What does docids_for_each_token contain?

A list of sets of document ids (one set for each search term).

What is contained in docids?

The intersection of the above sets: the ids of those documents that contain all terms.

How can we get all documents that contain any of the search terms?

Use set union instead of intersection.
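A hypothetical docs_with_any_token variant (not part of the slide's skeleton) could look like this; the .get(..., set()) default additionally guards against out-of-vocabulary tokens:

def docs_with_any_token(self, tokens):
    docids_for_each_token = [self.term_to_docids.get(token, set())
                             for token in tokens]
    docids = set().union(*docids_for_each_token)
    return [self.docid_to_doc[id] for id in docids]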

Bonus: What could be (roughly) the time complexity of set.intersection(...)?

A simple algorithm would be:

- For each document id in any of the sets, check whether it is contained in all of the other sets.

- If yes, add it to the result set.

- You can assume that checking set membership and adding to a set take constant time.

- Complexity: O(nm), where n is the number of search terms and m is the number of document ids in all sets.

- A more efficient algorithm would use sorted lists of document ids (posting lists), as sketched below.
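For illustration, the merge step on two sorted posting lists could look like this (a standalone sketch, not part of the DocumentCollection class):

def intersect_posting_lists(listA, listB):
    # Intersect two sorted lists of document ids in O(len(listA) + len(listB)).
    result = []
    i = j = 0
    while i < len(listA) and j < len(listB):
        if listA[i] == listB[j]:
            result.append(listA[i])
            i += 1
            j += 1
        elif listA[i] < listB[j]:
            i += 1
        else:
            j += 1
    return result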

(8)

Detail: Tf.Idf Weighting

def tfidf(self, counts):
    N = len(self.docid_to_doc)
    return {tok: tf * math.log(N / self.term_to_df[tok])
            for tok, tf in counts.items() if tok in self.term_to_df}

Input (dictionary): term ⇒ count of the term in the document.

Output (dictionary): term ⇒ weighted count.

Remember the formulas:

- Term frequency is just the number of occurrences of the term (we use the simple, unnormalized version).

- Inverse document frequency:

  $\log \frac{N}{\mathrm{df}_t}$

  where $N$ is the size of the document collection and $\mathrm{df}_t$ is the number of documents term $t$ occurs in.
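A quick worked example (illustrative numbers, not from the lecture): in a collection of $N = 1000$ documents, a term that occurs 3 times in a document and appears in $\mathrm{df}_t = 10$ documents gets the weight $3 \cdot \log(1000/10) = 3 \cdot \log(100) \approx 13.8$ (using the natural logarithm, as math.log does).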

(9)

Detail: Cosine Similarity

def cosine_similarity(self, docA, docB):
    weightedA = self.tfidf(docA.token_counts)
    weightedB = self.tfidf(docB.token_counts)
    dotAB = dot(weightedA, weightedB)
    normA = math.sqrt(dot(weightedA, weightedA))
    normB = math.sqrt(dot(weightedB, weightedB))
    if normA == 0 or normB == 0:
        return 0.
    else:
        return dotAB / (normA * normB)

Input (dictionaries): term frequencies of two documents.

Output: Cosine similarity of tf.idf weighted document vectors.

What would the dot helper function look like?

What is the meaning of normA and normB?

When can normA or normB be zero?

(10)

Detail: Cosine Similarity

What would the dot helper function look like?

def dot(dictA, dictB):
    return sum(dictA[tok] * dictB.get(tok, 0) for tok in dictA)

What is the meaning of normA and normB?

The vector norm (l2). It is defined as the square root of the dot product of a vector with itself:

$\|v\|_2 = \sqrt{\sum_i v_i^2}$

Intuitively, it measures the "length" of a document, and is high if a document contains many terms.

When can normA or normB be zero?

When a query only contains out-of-vocabulary words (tfidf(...) filters those words out).
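A quick worked example (illustrative numbers): for weighted vectors weightedA = {"cat": 1.0, "sat": 2.0} and weightedB = {"cat": 2.0}, we get dotAB = 1.0 * 2.0 = 2.0, normA = sqrt(1 + 4) = sqrt(5), and normB = 2.0, so the similarity is 2 / (2 * sqrt(5)) = 1/sqrt(5) ≈ 0.447.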

(11)

Putting it all together: Search Engine

Most of the functionality is already contained in the DocumentCollection class.

The search engine only has to

- Preprocess (tokenize) the query.

- Call the respective methods (e.g. docs_with_all_tokens, cosine_similarity).

- Sort the results to put the most similar results first.

- Select some text snippets for displaying to the user.

(12)

Search Engine: Code Skeleton

class SearchEngine:

    def __init__(self, doc_collection):
        # ...

    def ranked_documents(self, query):
        # ...

    def snippets(self, query, document, window=50):
        # ...

See full implementation in the lecture repository.
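For orientation, a minimal sketch of how ranked_documents could combine these pieces. It assumes __init__ stores the collection as self.doc_collection, uses naive whitespace tokenization, and wraps the query in a hypothetical QueryDocument so that cosine_similarity can score it like a document:

from collections import Counter

class QueryDocument:
    # Minimal stand-in (hypothetical) so a query can be scored like a document.
    def __init__(self, tokens):
        self.token_counts = Counter(tokens)

def ranked_documents(self, query):          # inside SearchEngine
    tokens = query.lower().split()          # simplistic tokenization (assumption)
    query_doc = QueryDocument(tokens)
    docs = self.doc_collection.docs_with_all_tokens(tokens)
    return sorted(docs,
                  key=lambda doc: self.doc_collection.cosine_similarity(query_doc, doc),
                  reverse=True)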

(13)

Summary

Representing:

- Text documents

- Document collections

Factory method constructors

Retrieving documents

Computing similarity

...

Questions?
