Node parsers.
HTMLNodeParser #
Bases: NodeParser
HTML node parser.
Splits a document into Nodes using custom HTML splitting logic.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_metadata` | `bool` | Whether to include metadata in nodes. | *required* |
| `include_prev_next_rel` | `bool` | Whether to include prev/next relationships. | *required* |
| `tags` | `List[str]` | HTML tags to extract text from. | `['p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'li', 'b', 'i', 'u', 'section']` |
Source code in llama_index/core/node_parser/file/html.py
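Example — a minimal usage sketch (assumes `beautifulsoup4` is installed, which this parser relies on for HTML parsing):

```python
from llama_index.core import Document
from llama_index.core.node_parser import HTMLNodeParser

html_doc = Document(
    text="<html><body><h1>Title</h1><p>First paragraph.</p></body></html>"
)

# Restrict extraction to a subset of the default tags.
parser = HTMLNodeParser(tags=["h1", "p"])
nodes = parser.get_nodes_from_documents([html_doc])
```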
class_name classmethod #
class_name() -> str
Get class name.
Source code in llama_index/core/node_parser/file/html.py
get_nodes_from_node #
Get nodes from document.
Source code in llama_index/core/node_parser/file/html.py
JSONNodeParser #
Bases: NodeParser
JSON node parser.
Splits a document into Nodes using custom JSON splitting logic.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_metadata` | `bool` | Whether to include metadata in nodes. | *required* |
| `include_prev_next_rel` | `bool` | Whether to include prev/next relationships. | *required* |
Source code in llama_index/core/node_parser/file/json.py
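Example — a minimal sketch that feeds a JSON string wrapped in a `Document`:

```python
import json

from llama_index.core import Document
from llama_index.core.node_parser import JSONNodeParser

payload = {"title": "Example", "sections": [{"body": "Some text."}]}
json_doc = Document(text=json.dumps(payload))

parser = JSONNodeParser()
nodes = parser.get_nodes_from_documents([json_doc])
```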
class_name classmethod #
class_name() -> str
Get class name.
Source code in llama_index/core/node_parser/file/json.py
get_nodes_from_node #
Get nodes from document.
Source code in llama_index/core/node_parser/file/json.py
MarkdownNodeParser #
Bases: NodeParser
Markdown node parser.
Splits a document into Nodes using Markdown header-based splitting logic. Each node contains its text content and the path of headers leading to it.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_metadata` | `bool` | Whether to include metadata in nodes. | *required* |
| `include_prev_next_rel` | `bool` | Whether to include prev/next relationships. | *required* |
| `header_path_separator` | `str` | Separator character used for section header path metadata. | `'/'` |
Source code in llama_index/core/node_parser/file/markdown.py
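Example — a minimal sketch; each resulting node carries the path of headers leading to it (joined with `header_path_separator`) in its metadata:

```python
from llama_index.core import Document
from llama_index.core.node_parser import MarkdownNodeParser

md_doc = Document(text="# Guide\n\nIntro text.\n\n## Setup\n\nInstall steps.")

parser = MarkdownNodeParser(header_path_separator="/")
nodes = parser.get_nodes_from_documents([md_doc])
# The "Setup" node's metadata records the header path "Guide" -> "Setup".
```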
get_nodes_from_node #
Get nodes from document by splitting on headers.
Source code in llama_index/core/node_parser/file/markdown.py
SimpleFileNodeParser #
Bases: NodeParser
Simple file node parser.
Splits a document loaded from a file into Nodes, automatically detecting which NodeParser to use based on the file type.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_metadata` | `bool` | Whether to include metadata in nodes. | *required* |
| `include_prev_next_rel` | `bool` | Whether to include prev/next relationships. | *required* |
Source code in llama_index/core/node_parser/file/simple_file.py
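Example — a minimal sketch, assuming the `llama-index-readers-file` integration provides `FlatReader` for loading the raw file:

```python
from pathlib import Path

from llama_index.core.node_parser import SimpleFileNodeParser
from llama_index.readers.file import FlatReader  # assumed integration package

docs = FlatReader().load_data(Path("README.md"))

parser = SimpleFileNodeParser()
# A Markdown file is routed to MarkdownNodeParser, HTML to HTMLNodeParser, etc.
nodes = parser.get_nodes_from_documents(docs)
```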
class_name classmethod #
class_name() -> str
Get class name.
Source code in llama_index/core/node_parser/file/simple_file.py
MetadataAwareTextSplitter #
Bases: TextSplitter
Source code in llama_index/core/node_parser/interface.py
NodeParser #
Bases: TransformComponent, ABC
Base interface for node parser.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_metadata` | `bool` | Whether or not to consider metadata when splitting. | `True` |
| `include_prev_next_rel` | `bool` | Include prev/next node relationships. | `True` |
| `callback_manager` | `CallbackManager` | | `<CallbackManager object>` |
| `id_func` | `Annotated[Callable, FieldInfo, BeforeValidator, WithJsonSchema, WithJsonSchema, PlainSerializer] \| None` | Function to generate node IDs. | `None` |
Source code in llama_index/core/node_parser/interface.py
get_nodes_from_documents #
get_nodes_from_documents(documents: Sequence[Document], show_progress: bool = False, **kwargs: Any) -> List[BaseNode]
Parse documents into nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `documents` | `Sequence[Document]` | Documents to parse. | *required* |
| `show_progress` | `bool` | Whether to show a progress bar. | `False` |
Source code in llama_index/core/node_parser/interface.py
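Example — this entry point works the same way for any `NodeParser` subclass; `SentenceSplitter` is used here purely for illustration:

```python
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

documents = [Document(text="Some long text to be chunked ...")]

parser = SentenceSplitter()
nodes = parser.get_nodes_from_documents(documents, show_progress=True)
```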
TextSplitter #
Bases: NodeParser
Source code in llama_index/core/node_parser/interface.py
HierarchicalNodeParser #
Bases: NodeParser
Hierarchical node parser.
Splits a document into a recursive hierarchy of Nodes using a NodeParser.
NOTE: this will return a hierarchy of nodes in a flat list, where there will be overlap between parent nodes (e.g. with a bigger chunk size), and child nodes per parent (e.g. with a smaller chunk size).
For instance, this may return a list of nodes like:
- list of top-level nodes with chunk size 2048
- list of second-level nodes, where each node is a child of a top-level node, chunk size 512
- list of third-level nodes, where each node is a child of a second-level node, chunk size 128
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `chunk_sizes` | `List[int] \| None` | The chunk sizes to use when splitting documents, in order of level. | `None` |
| `node_parser_ids` | `List[str]` | List of IDs for the node parsers to use when splitting documents, in order of level (first ID used for first level, etc.). | `<dynamic>` |
| `node_parser_map` | `Dict[str, NodeParser]` | Map of node parser ID to node parser. | *required* |
Source code in llama_index/core/node_parser/relational/hierarchical.py
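Example — a minimal sketch of the three-level hierarchy described above, using the `get_leaf_nodes` / `get_root_nodes` helpers documented at the end of this page:

```python
from llama_index.core import Document
from llama_index.core.node_parser import (
    HierarchicalNodeParser,
    get_leaf_nodes,
    get_root_nodes,
)

parser = HierarchicalNodeParser.from_defaults(chunk_sizes=[2048, 512, 128])
nodes = parser.get_nodes_from_documents([Document(text="...")])  # flat list, all levels

leaf_nodes = get_leaf_nodes(nodes)  # smallest chunks (size 128), each linked to a parent
root_nodes = get_root_nodes(nodes)  # top-level chunks (size 2048)
```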
get_nodes_from_documents #
get_nodes_from_documents(documents: Sequence[Document], show_progress: bool = False, **kwargs: Any) -> List[BaseNode]
Parse document into nodes.
Source code in llama_index/core/node_parser/relational/hierarchical.py
MarkdownElementNodeParser #
Bases: BaseElementNodeParser
Markdown element node parser.
Splits a markdown document into Text Nodes and Index Nodes corresponding to embedded objects (e.g. tables).
Source code in llama_index/core/node_parser/relational/markdown_element.py
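Example — a hedged sketch; table summarization uses an LLM (resolved from `Settings` when none is passed), and `get_nodes_and_objects` is assumed from the `BaseElementNodeParser` interface:

```python
from llama_index.core import Document
from llama_index.core.node_parser import MarkdownElementNodeParser

md = "# Report\n\n| city | population |\n|---|---|\n| Oslo | 700000 |\n"

parser = MarkdownElementNodeParser()
nodes = parser.get_nodes_from_documents([Document(text=md)])

# Split into plain text nodes and IndexNode objects pointing at tables.
base_nodes, objects = parser.get_nodes_and_objects(nodes)
```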
get_nodes_from_node #
Get nodes from node.
Source code in llama_index/core/node_parser/relational/markdown_element.py
aget_nodes_from_node async #
Get nodes from node.
Source code in llama_index/core/node_parser/relational/markdown_element.py
extract_html_tables #
extract_html_tables(elements: List[Element]) -> List[Element]
Extract html tables from text.
Returns:
| Type | Description |
|---|---|
| `List[Element]` | Text elements split by table_text elements. |
Source code in llama_index/core/node_parser/relational/markdown_element.py
extract_elements #
extract_elements(text: str, node_id: Optional[str] = None, table_filters: Optional[List[Callable]] = None, **kwargs: Any) -> List[Element]
Extract elements from text.
Source code in llama_index/core/node_parser/relational/markdown_element.py
filter_table #
filter_table(table_element: Any) -> bool
Filter tables.
Source code in llama_index/core/node_parser/relational/markdown_element.py
UnstructuredElementNodeParser #
Bases: BaseElementNodeParser
Unstructured element node parser.
Splits a document into Text Nodes and Index Nodes corresponding to embedded objects (e.g. tables).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `partitioning_parameters` | `Dict[str, Any] \| None` | Extra dictionary representing parameters of the partitioning process. | `{}` |
Source code in llama_index/core/node_parser/relational/unstructured_element.py
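Example — a hedged sketch; this parser is assumed to require the `unstructured` (and typically `lxml`) packages for partitioning embedded HTML tables:

```python
from llama_index.core import Document
from llama_index.core.node_parser import UnstructuredElementNodeParser

html = "<p>Intro.</p><table><tr><td>cell</td></tr></table>"

parser = UnstructuredElementNodeParser()
nodes = parser.get_nodes_from_documents([Document(text=html)])
```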
get_nodes_from_node #
Get nodes from node.
Source code in llama_index/core/node_parser/relational/unstructured_element.py
aget_nodes_from_node async #
Get nodes from node.
Source code in llama_index/core/node_parser/relational/unstructured_element.py
extract_elements #
extract_elements(text: str, table_filters: Optional[List[Callable]] = None, **kwargs: Any) -> List[Element]
Extract elements from text.
Source code in llama_index/core/node_parser/relational/unstructured_element.py
filter_table #
filter_table(table_element: Any) -> bool
Filter tables.
Source code in llama_index/core/node_parser/relational/unstructured_element.py
LlamaParseJsonNodeParser #
Bases: BaseElementNodeParser
Llama Parse Json format element node parser.
Splits a json format document from LlamaParse into Text Nodes and Index Nodes corresponding to embedded objects (e.g. tables).
Source code in llama_index/core/node_parser/relational/llama_parse_json_element.py
get_nodes_from_node #
Get nodes from node.
Source code in llama_index/core/node_parser/relational/llama_parse_json_element.py
aget_nodes_from_node async #
Get nodes from node.
Source code in llama_index/core/node_parser/relational/llama_parse_json_element.py
extract_elements #
extract_elements(text: str, mode: Optional[str] = 'json', node_id: Optional[str] = None, node_metadata: Optional[Dict[str, Any]] = None, table_filters: Optional[List[Callable]] = None, **kwargs: Any) -> List[Element]
Extract elements from json based nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The node's text content. | *required* |
| `mode` | `Optional[str]` | Mode that selects which element types are returned. | `'json'` |
| `node_id` | `Optional[str]` | Unique ID for the node. | `None` |
| `node_metadata` | `Optional[Dict[str, Any]]` | Metadata for the node; the JSON output for the nodes contains many fields per element. | `None` |
Source code in llama_index/core/node_parser/relational/llama_parse_json_element.py
filter_table #
filter_table(table_element: Any) -> bool
Filter tables.
Source code in llama_index/core/node_parser/relational/llama_parse_json_element.py
CodeSplitter #
Bases: TextSplitter
Split code using an AST parser.
Thank you to Kevin Lu / SweepAI for suggesting this elegant code splitting solution. https://docs.sweep.dev/blogs/chunking-2m-files
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `language` | `str` | The programming language of the code being split. | *required* |
| `chunk_lines` | `int` | The number of lines to include in each chunk. | `40` |
| `chunk_lines_overlap` | `int` | How many lines of code each chunk overlaps with. | `15` |
| `max_chars` | `int` | Maximum number of characters per chunk. | `1500` |
Source code in llama_index/core/node_parser/text/code.py
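Example — a minimal sketch; AST parsing is assumed to require `tree-sitter` plus a grammar for the target language (e.g. via `tree-sitter-language-pack`, depending on your version):

```python
from llama_index.core.node_parser import CodeSplitter

splitter = CodeSplitter.from_defaults(language="python")
chunks = splitter.split_text(
    "def add(a, b):\n"
    "    return a + b\n"
    "\n"
    "def sub(a, b):\n"
    "    return a - b\n"
)
```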
from_defaults classmethod #
from_defaults(language: str, chunk_lines: int = DEFAULT_CHUNK_LINES, chunk_lines_overlap: int = DEFAULT_LINES_OVERLAP, max_chars: int = DEFAULT_MAX_CHARS, callback_manager: Optional[CallbackManager] = None, parser: Any = None) -> CodeSplitter
Create a CodeSplitter with default values.
Source code in llama_index/core/node_parser/text/code.py
split_text #
split_text(text: str) -> List[str]
Split incoming code into chunks using the AST parser.
This method parses the input code into an AST and then chunks it while preserving syntactic structure. It handles error cases and ensures the code can be properly parsed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The source code text to split. | *required* |

Returns:

| Type | Description |
|---|---|
| `List[str]` | A list of code chunks. |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the code cannot be parsed for the specified language. |
Source code in llama_index/core/node_parser/text/code.py
LangchainNodeParser #
Bases: TextSplitter
Basic wrapper around langchain's text splitter.
TODO: Figure out how to make this metadata aware.
Source code in llama_index/core/node_parser/text/langchain.py
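Example — a minimal sketch wrapping a LangChain splitter (assumes the `langchain` package; in newer versions the splitter lives in `langchain_text_splitters`):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

from llama_index.core import Document
from llama_index.core.node_parser import LangchainNodeParser

parser = LangchainNodeParser(
    RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
)
nodes = parser.get_nodes_from_documents([Document(text="Some long text ...")])
```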
split_text #
split_text(text: str) -> List[str]
Split text into sentences.
Source code in llama_index/core/node_parser/text/langchain.py
SemanticSplitterNodeParser #
Bases: NodeParser
Semantic node parser.
Splits a document into Nodes, with each node being a group of semantically related sentences.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `buffer_size` | `int` | Number of sentences to group together when evaluating semantic similarity. | `1` |
| `embed_model` | `BaseEmbedding` | Embedding model to use. | *required* |
| `sentence_splitter` | `Optional[Callable]` | Splits text into sentences. | `<function split_by_sentence_tokenizer>` |
| `breakpoint_percentile_threshold` | `int` | Dissimilarity threshold for creating semantic breakpoints; a lower value generates more nodes. | `95` |
| `include_metadata` | `bool` | Whether to include metadata in nodes. | *required* |
| `include_prev_next_rel` | `bool` | Whether to include prev/next relationships. | *required* |
Source code in llama_index/core/node_parser/text/semantic_splitter.py
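Example — a minimal sketch; any `BaseEmbedding` works, and the OpenAI integration shown is an assumption (requires `llama-index-embeddings-openai` and an API key):

```python
from llama_index.core import Document
from llama_index.core.node_parser import SemanticSplitterNodeParser
from llama_index.embeddings.openai import OpenAIEmbedding  # assumed integration

splitter = SemanticSplitterNodeParser(
    buffer_size=1,
    breakpoint_percentile_threshold=95,
    embed_model=OpenAIEmbedding(),
)
nodes = splitter.get_nodes_from_documents([Document(text="...")])
```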
build_semantic_nodes_from_documents #
build_semantic_nodes_from_documents(documents: Sequence[Document], show_progress: bool = False) -> List[BaseNode]
Build window nodes from documents.
Source code in llama_index/core/node_parser/text/semantic_splitter.py
abuild_semantic_nodes_from_documents async #
abuild_semantic_nodes_from_documents(documents: Sequence[Document], show_progress: bool = False) -> List[BaseNode]
Asynchronously build window nodes from documents.
Source code in llama_index/core/node_parser/text/semantic_splitter.py
SemanticDoubleMergingSplitterNodeParser #
Bases: NodeParser
Semantic double merging text splitter.
Splits a document into Nodes, with each node being a group of semantically related sentences.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `language_config` | `LanguageConfig` | Chooses the language and spaCy language model to be used. | `<LanguageConfig object>` |
| `initial_threshold` | `float` | Threshold for initializing a new chunk. | `0.6` |
| `appending_threshold` | `float` | Threshold for appending new sentences to a chunk. | `0.8` |
| `merging_threshold` | `float` | Threshold for merging whole chunks. | `0.8` |
| `max_chunk_size` | `int` | Maximum size of a chunk (in characters). | `1000` |
| `merging_range` | `int` | How many chunks 'ahead' beyond the nearest neighbor to merge if similar (1 or 2 available). | `1` |
| `merging_separator` | `str` | The separator to use when merging chunks. Defaults to a single space. | `' '` |
| `sentence_splitter` | `Optional[Callable]` | Splits text into sentences. | `<function split_by_sentence_tokenizer>` |
Source code in llama_index/core/node_parser/text/semantic_double_merging_splitter.py
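Example — a hedged sketch; `LanguageConfig` is imported from the source module listed above, and the spaCy model named here must be downloaded separately (an assumption about your environment):

```python
from llama_index.core import Document
from llama_index.core.node_parser import SemanticDoubleMergingSplitterNodeParser
from llama_index.core.node_parser.text.semantic_double_merging_splitter import (
    LanguageConfig,
)

config = LanguageConfig(language="english", spacy_model="en_core_web_md")

splitter = SemanticDoubleMergingSplitterNodeParser(
    language_config=config,
    initial_threshold=0.4,
    appending_threshold=0.5,
    merging_threshold=0.5,
    max_chunk_size=5000,
)
nodes = splitter.get_nodes_from_documents([Document(text="...")])
```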
build_semantic_nodes_from_documents #
Build window nodes from documents.
Source code in llama_index/core/node_parser/text/semantic_double_merging_splitter.py
build_semantic_nodes_from_nodes #
Build window nodes from nodes.
Source code in llama_index/core/node_parser/text/semantic_double_merging_splitter.py
SentenceSplitter #
Bases: MetadataAwareTextSplitter
Parse text with a preference for complete sentences.
In general, this class tries to keep sentences and paragraphs together. Therefore, compared to the original TokenTextSplitter, it is less likely to leave hanging sentences or partial sentences at the end of a node chunk.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `chunk_size` | `int` | The token chunk size for each chunk. | `1024` |
| `chunk_overlap` | `int` | The token overlap of each chunk when splitting. | `200` |
| `separator` | `str` | Default separator for splitting into words. | `' '` |
| `paragraph_separator` | `str` | Separator between paragraphs. | `'\n\n\n'` |
| `secondary_chunking_regex` | `str \| None` | Backup regex for splitting into sentences. | `'[^,.;。?!]+[,.;。?!]?\|[,.;。?!]'` |
Source code in llama_index/core/node_parser/text/sentence.py
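Example — a minimal sketch using the defaults documented above:

```python
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter

splitter = SentenceSplitter.from_defaults(chunk_size=1024, chunk_overlap=200)

# Split documents into nodes, or raw text into plain string chunks.
nodes = splitter.get_nodes_from_documents(
    [Document(text="First sentence. Second sentence.")]
)
chunks = splitter.split_text("First sentence. Second sentence.")
```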
from_defaults classmethod #
from_defaults(separator: str = ' ', chunk_size: int = DEFAULT_CHUNK_SIZE, chunk_overlap: int = SENTENCE_CHUNK_OVERLAP, tokenizer: Optional[Callable] = None, paragraph_separator: str = DEFAULT_PARAGRAPH_SEP, chunking_tokenizer_fn: Optional[Callable[[str], List[str]]] = None, secondary_chunking_regex: str = CHUNKING_REGEX, callback_manager: Optional[CallbackManager] = None, include_metadata: bool = True, include_prev_next_rel: bool = True) -> SentenceSplitter
Initialize with parameters.
Source code in llama_index/core/node_parser/text/sentence.py
SentenceWindowNodeParser #
Bases: NodeParser
Sentence window node parser.
Splits a document into Nodes, with each node being a sentence. Each node contains a window from the surrounding sentences in the metadata.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `sentence_splitter` | `Optional[Callable]` | Splits text into sentences. | `<function split_by_sentence_tokenizer>` |
| `include_metadata` | `bool` | Whether to include metadata in nodes. | *required* |
| `include_prev_next_rel` | `bool` | Whether to include prev/next relationships. | *required* |
| `window_size` | `int` | The number of sentences on each side of a sentence to capture. | `3` |
| `window_metadata_key` | `str` | The metadata key to store the sentence window under. | `'window'` |
| `original_text_metadata_key` | `str` | The metadata key to store the original sentence in. | `'original_text'` |
|
Source code in llama_index/core/node_parser/text/sentence_window.py
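Example — a minimal sketch; the metadata keys shown are the documented defaults:

```python
from llama_index.core import Document
from llama_index.core.node_parser import SentenceWindowNodeParser

parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)
nodes = parser.get_nodes_from_documents([Document(text="One. Two. Three. Four.")])

# node.metadata["window"] holds the surrounding sentences;
# node.metadata["original_text"] holds the single source sentence.
```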
build_window_nodes_from_documents #
Build window nodes from documents.
Source code in llama_index/core/node_parser/text/sentence_window.py
TokenTextSplitter #
Bases: MetadataAwareTextSplitter
Implementation of splitting text that looks at word tokens.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `chunk_size` | `int` | The token chunk size for each chunk. | `1024` |
| `chunk_overlap` | `int` | The token overlap of each chunk when splitting. | `20` |
| `separator` | `str` | Default separator for splitting into words. | `' '` |
| `backup_separators` | `List` | Additional separators for splitting. | `<dynamic>` |
| `keep_whitespaces` | `bool` | Whether to keep leading/trailing whitespaces in the chunk. | `False` |
Source code in llama_index/core/node_parser/text/token.py
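Example — a minimal sketch, including the metadata-aware variant that reserves token budget for a metadata string:

```python
from llama_index.core.node_parser import TokenTextSplitter

splitter = TokenTextSplitter(chunk_size=1024, chunk_overlap=20, separator=" ")

text = "word " * 5000
chunks = splitter.split_text(text)

# Reserve space so that chunk + metadata still fits within chunk_size tokens.
chunks = splitter.split_text_metadata_aware(text, metadata_str="title: Example")
```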
from_defaults classmethod #
from_defaults(chunk_size: int = DEFAULT_CHUNK_SIZE, chunk_overlap: int = DEFAULT_CHUNK_OVERLAP, separator: str = ' ', backup_separators: Optional[List[str]] = ['\n'], callback_manager: Optional[CallbackManager] = None, keep_whitespaces: bool = False, include_metadata: bool = True, include_prev_next_rel: bool = True, id_func: Optional[Callable[[int, Document], str]] = None) -> TokenTextSplitter
Initialize with default parameters.
Source code in llama_index/core/node_parser/text/token.py
split_text_metadata_aware #
split_text_metadata_aware(text: str, metadata_str: str) -> List[str]
Split text into chunks, reserving space required for metadata str.
Source code in llama_index/core/node_parser/text/token.py
split_text #
split_text(text: str) -> List[str]
Split text into chunks.
Source code in llama_index/core/node_parser/text/token.py
get_leaf_nodes #
Get leaf nodes.
Source code in llama_index/core/node_parser/relational/hierarchical.py
get_root_nodes #
Get root nodes.
Source code in llama_index/core/node_parser/relational/hierarchical.py
get_child_nodes #
Get child nodes of nodes from given all_nodes.
Source code in llama_index/core/node_parser/relational/hierarchical.py
get_deeper_nodes #
Get children of root nodes in given nodes that have given depth.
Source code in llama_index/core/node_parser/relational/hierarchical.py
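Example — a hedged sketch of the four hierarchy helpers, assuming the signatures shown in the source above (`get_child_nodes(nodes, all_nodes)` and `get_deeper_nodes(nodes, depth)`):

```python
from llama_index.core import Document
from llama_index.core.node_parser import (
    HierarchicalNodeParser,
    get_child_nodes,
    get_deeper_nodes,
    get_leaf_nodes,
    get_root_nodes,
)

parser = HierarchicalNodeParser.from_defaults(chunk_sizes=[2048, 512, 128])
nodes = parser.get_nodes_from_documents([Document(text="...")])

root_nodes = get_root_nodes(nodes)             # nodes with no parent
leaf_nodes = get_leaf_nodes(nodes)             # nodes with no children
level_one = get_deeper_nodes(nodes, depth=1)   # children of the root nodes
children = get_child_nodes(root_nodes, nodes)  # children of a subset, resolved against all nodes
```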