spark.core.utils#
Classes#
- InheritanceFlags: Support for integer-based Flags.
- InheritanceLeaf: Leaf object for the InheritanceTree data structure.
- InheritanceTree: Tree-like data structure to manage the inheritance status of variables in the Spark Graph Editor.
Functions#
- normalize_str: Converts any string into a consistent lowercase_snake_case format.
- to_human_readable: Converts a string from various programming cases into a human-readable format.
- get_einsum_labels: Generates labels for a generalized dot product using Einstein notation.
- get_axes_einsum_labels: Generates labels for a generalized dot product using Einstein notation.
- get_einsum_dot_string: Generates labels for a generalized dot product using Einstein notation.
- get_einsum_dot_red_string: Generates labels for a generalized dot reduction product using Einstein notation.
- get_einsum_dot_exp_string: Generates labels for a generalized dot expansion product using Einstein notation.
- validate_shape: Verifies that the object is broadcastable to a valid shape (tuple of integers).
- validate_list_shape: Verifies that the object is broadcastable to a valid list of shapes (a list of tuples of integers).
- Merges a list of shapes into a single shape.
- is_shape: Checks if the object is broadcastable to a shape.
- is_list_shape: Checks if the object is broadcastable to a list of shapes.
- is_dict_of: Check if an object instance is of 'dict[key_cls, value_cls]'.
- Check if an object instance is of 'list[cls]'.
- is_dtype: Check if an object is a 'DTypeLike'.
- is_float: Check if an object is a float.
- ascii_tree: Build an ASCII tree from indentation-based text.
Module Contents#
- spark.core.utils.normalize_str(s)[source]#
Converts any string into a consistent lowercase_snake_case format.
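As a rough illustration of the intended behavior (this is a hypothetical sketch, not the library's actual implementation), a snake_case normalizer could look like:

```python
import re

def normalize_str(s: str) -> str:
    # Hypothetical sketch: split camelCase/PascalCase boundaries,
    # map spaces and hyphens to underscores, then lowercase.
    s = re.sub(r'(?<=[a-z0-9])(?=[A-Z])', '_', s)
    s = re.sub(r'[\s\-]+', '_', s)
    return s.lower()

print(normalize_str('MyVariableName'))   # my_variable_name
print(normalize_str('some-mixed Name'))  # some_mixed_name
```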
- spark.core.utils.to_human_readable(s, capitalize_all=False)[source]#
Converts a string from various programming cases into a human-readable format.
- Input:
s: str, string to convert
- Output:
str, human-readable string
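A minimal sketch of this conversion, assuming underscore- and hyphen-separated input (the real function also handles other programming cases such as camelCase; this toy version does not):

```python
def to_human_readable(s: str, capitalize_all: bool = False) -> str:
    # Hypothetical sketch: treat underscores and hyphens as word breaks.
    words = [w for w in s.replace('-', '_').split('_') if w]
    if capitalize_all:
        return ' '.join(w.capitalize() for w in words)
    return ' '.join(words).capitalize()

print(to_human_readable('learning_rate'))                       # Learning rate
print(to_human_readable('learning_rate', capitalize_all=True))  # Learning Rate
```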
- spark.core.utils.get_einsum_labels(num_dims, offset=0)[source]#
Generates labels for a generalized dot product using Einstein notation.
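A plausible sketch of what such a label generator does (assuming one lowercase letter per dimension, offset into the alphabet; the actual implementation may differ):

```python
import string

def einsum_labels(num_dims: int, offset: int = 0) -> str:
    # Hypothetical sketch: one lowercase einsum label per dimension,
    # starting 'offset' letters into the alphabet.
    return string.ascii_lowercase[offset:offset + num_dims]

print(einsum_labels(4))            # abcd
print(einsum_labels(2, offset=2))  # cd
```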
- spark.core.utils.get_axes_einsum_labels(axes, ignore_repeated=False)[source]#
Generates labels for a generalized dot product using Einstein notation.
- spark.core.utils.get_einsum_dot_string(x, y, ignore_one_dims=True, side='right')[source]#
- Generates labels for a generalized dot product using Einstein notation.
right: (c,d)•(a,b,c,d)=(a,b) - cd,abcd->ab | (a,b,c,d)•(c,d)=(a,b) - abcd,cd->ab
left: (a,b)•(a,b,c,d)=(c,d) - ab,abcd->cd | (a,b,c,d)•(a,b)=(c,d) - abcd,ab->cd
- Parameters:
x (tuple[int, ...]) – shape of the first operand of the dot product
y (tuple[int, ...]) – shape of the second operand of the dot product
ignore_one_dims (bool) – ignore size-one dimensions when computing the labels (squeeze shapes), default: True
side (str) – side of the dot product, default: “right”
- Returns:
str, a string representing the dot product operation
- Return type:
str
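The right-sided pattern above can be reproduced with a small sketch (a hypothetical reimplementation for illustration only, covering just the documented "right" case with no size-one squeezing):

```python
import string

def dot_string(x: tuple, y: tuple) -> str:
    # Right-sided case: contract the shorter shape against the trailing
    # dimensions of the longer one, per the pattern (c,d)·(a,b,c,d) -> 'cd,abcd->ab'.
    big, small = (x, y) if len(x) >= len(y) else (y, x)
    labels = string.ascii_lowercase[:len(big)]
    tail = labels[len(big) - len(small):]   # contracted trailing labels
    out = labels[:len(big) - len(small)]    # surviving leading labels
    lhs = (tail, labels) if big is y else (labels, tail)
    return f'{lhs[0]},{lhs[1]}->{out}'

print(dot_string((3, 4), (2, 5, 3, 4)))  # cd,abcd->ab
print(dot_string((2, 5, 3, 4), (3, 4)))  # abcd,cd->ab
```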
- spark.core.utils.get_einsum_dot_red_string(x, y, ignore_one_dims=True, side='right')[source]#
- Generates labels for a generalized dot reduction product using Einstein notation.
right: (a,b)•(a,b,c,d)=(a,b) - ab,abcd->ab | (a,b,c,d)•(a,b)=(a,b) - abcd,ab->ab
left: (c,d)•(a,b,c,d)=(c,d) - cd,abcd->cd | (a,b,c,d)•(c,d)=(c,d) - abcd,cd->cd
- Parameters:
x (tuple[int, ...]) – shape of the first operand of the dot product
y (tuple[int, ...]) – shape of the second operand of the dot product
ignore_one_dims (bool) – ignore size-one dimensions when computing the labels (squeeze shapes), default: True
side (str) – side of the reduction-dot product, default: “right”
- Returns:
str, a string representing the dot product operation
- Return type:
str
- spark.core.utils.get_einsum_dot_exp_string(x, y, ignore_one_dims=False, side='right')[source]#
- Generates labels for a generalized dot expansion product using Einstein notation.
right: (a,b)•(a,b,c,d)=(a,b,c,d) - ab,abcd->abcd | (a,b,c,d)•(a,b)=(a,b,c,d) - abcd,ab->abcd
left: (c,d)•(a,b,c,d)=(a,b,c,d) - cd,abcd->abcd | (a,b,c,d)•(c,d)=(a,b,c,d) - abcd,cd->abcd
none: (a,b)•(c,d)=(a,b,c,d) - ab,cd->abcd | (a)•(b,c,d,e)=(a,b,c,d,e) - a,bcde->abcde
- Parameters:
x (tuple[int, ...]) – shape of the first operand of the dot product
y (tuple[int, ...]) – shape of the second operand of the dot product
ignore_one_dims (bool) – ignore size-one dimensions when computing the labels (squeeze shapes), default: False
side (str) – side of the expansion-dot product, default: “right”
- Returns:
str, a string representing the dot product operation
- Return type:
str
- spark.core.utils.validate_shape(obj)[source]#
Verifies that the object is broadcastable to a valid shape (tuple of integers). Returns the shape.
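A hedged sketch of the normalize-or-raise behavior described here (hypothetical; the real validator may accept more input forms):

```python
def validate_shape(obj):
    # Hypothetical sketch: accept an int or an iterable of ints and
    # normalize it to a tuple of ints; raise TypeError otherwise.
    if isinstance(obj, int):
        return (obj,)
    try:
        shape = tuple(obj)
    except TypeError:
        raise TypeError(f'{obj!r} is not broadcastable to a shape')
    if not all(isinstance(d, int) for d in shape):
        raise TypeError(f'{obj!r} is not broadcastable to a shape')
    return shape

print(validate_shape(4))       # (4,)
print(validate_shape([2, 3]))  # (2, 3)
```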
- spark.core.utils.validate_list_shape(obj)[source]#
Verifies that the object is broadcastable to a valid list of shapes (a list of tuples of integers). Returns the list of shapes.
- spark.core.utils.is_shape(obj)[source]#
Checks if the obj is broadcastable to a shape.
- Parameters:
obj (Any) – the instance to check.
- Returns:
bool, True if the object is broadcastable to a shape, False otherwise.
- Return type:
bool
- spark.core.utils.is_list_shape(obj)[source]#
Checks if the obj is broadcastable to a list of shapes.
- Parameters:
obj (Any) – the instance to check.
- Returns:
bool, True if the object is broadcastable to a list of shapes, False otherwise.
- Return type:
bool
- spark.core.utils.is_dict_of(obj, value_cls, key_cls=str)[source]#
Check if an object instance is of ‘dict[key_cls, value_cls]’.
- Parameters:
obj (Any) – the instance to check.
value_cls (type) – expected class of the dictionary values.
key_cls (type) – expected class of the dictionary keys, default: str.
- Returns:
bool, True if the object is an instance of ‘dict[key_cls, value_cls]’, False otherwise.
- Return type:
bool
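A minimal sketch of such a runtime check, assuming it validates every key and value (hypothetical; not the library's actual code):

```python
def is_dict_of(obj, value_cls, key_cls=str) -> bool:
    # Hypothetical sketch: every key must be a key_cls instance and
    # every value a value_cls instance.
    return isinstance(obj, dict) and all(
        isinstance(k, key_cls) and isinstance(v, value_cls)
        for k, v in obj.items()
    )

print(is_dict_of({'a': 1, 'b': 2}, int))    # True
print(is_dict_of({'a': 1, 'b': 'x'}, int))  # False
```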
- spark.core.utils.is_dtype(obj)[source]#
Check if an object is a ‘DTypeLike’.
- Parameters:
obj (tp.Any) – The instance to check.
- Returns:
bool, True if the object is a ‘DTypeLike’, False otherwise.
- Return type:
bool
- spark.core.utils.is_float(obj)[source]#
Check if an object is a float.
- Parameters:
obj (tp.Any) – The instance to check.
- Returns:
bool, True if the object is a float, False otherwise.
- Return type:
bool
- spark.core.utils.ascii_tree(text)[source]#
Build an ASCII tree from indentation-based text. Each level is inferred from leading spaces.
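To illustrate the idea, here is a hypothetical sketch of such a renderer (assuming two spaces per indentation level; the real function's conventions may differ):

```python
def ascii_tree(text: str) -> str:
    # Hypothetical sketch: each line's depth is inferred as leading spaces // 2.
    entries = []
    for line in text.splitlines():
        if line.strip():
            indent = len(line) - len(line.lstrip(' '))
            entries.append((indent // 2, line.strip()))

    def is_last(i, depth):
        # A node is the last of its siblings if no later entry shares its
        # depth before a shallower entry closes the branch.
        for j in range(i + 1, len(entries)):
            if entries[j][0] < depth:
                return True
            if entries[j][0] == depth:
                return False
        return True

    out = []
    for i, (depth, name) in enumerate(entries):
        if depth == 0:
            out.append(name)
            continue
        prefix = ''
        for d in range(1, depth):
            # Draw a continuation bar for each ancestor that still has
            # pending siblings below this node.
            for j in range(i - 1, -1, -1):
                if entries[j][0] == d:
                    prefix += '    ' if is_last(j, d) else '│   '
                    break
        prefix += '└── ' if is_last(i, depth) else '├── '
        out.append(prefix + name)
    return '\n'.join(out)

print(ascii_tree('root\n  a\n    a1\n  b'))
# root
# ├── a
# │   └── a1
# └── b
```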
- class spark.core.utils.InheritanceFlags[source]#
Bases: enum.IntFlag
Support for integer-based Flags.
- class spark.core.utils.InheritanceLeaf[source]#
Leaf object for the InheritanceTree data structure.
- flags: InheritanceFlags = 0[source]#
- parent: InheritanceTree = None[source]#
- class spark.core.utils.InheritanceTree(path=[])[source]#
Tree-like data structure to manage the inheritance status of variables in the Spark Graph Editor.
This data structure is used to link variables with the same names and types for simultaneous updates within the GUI.
- add_leaf(path, type_string='', inheritance_childs=[], flags=0, break_inheritance=False, **kwargs)[source]#
Adds a new leaf to the tree.
- Input:
path: list[str], path to the new leaf node, with the last entry the name of the leaf
type_string: str, string representation of the types this variable manages
inheritance_childs: list[list[str]] = [], list of children that can inherit from this variable (Note: do not set by hand)
flags: InheritanceFlags, 4-bit flags that represent inheritance possibilities (Note: do not set by hand)
break_inheritance: bool, boolean flag to disconnect this variable from the inheritance dynamics
- add_branch(path)[source]#
Adds a new branch to the tree.
- Input:
path: list[str], path to the new branch, with the last entry the name of the branch
- validate(inheriting_labels={})[source]#
Validates the flags and the inheritance children of the tree.
- Parameters:
inheriting_labels (dict)
- Return type:
None
- get_leaf(path)[source]#
Returns the leaf node at the given path.
- Input:
path: list[str], path to the leaf node, with the last entry the name of the leaf
- Returns:
InheritanceLeaf, returns the leaf node instance.
- Parameters:
path (list[str])
- Return type:
InheritanceLeaf
- get_subtree(path)[source]#
Returns the subtree rooted at the given path.
- Input:
path: list[str], path to the subtree node, with the last entry the name of the branch
- Returns:
InheritanceTree, returns the branch node instance.
- Parameters:
path (list[str])
- Return type:
InheritanceTree
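The path-addressed access pattern shared by add_branch, add_leaf, and get_leaf can be sketched with a toy class (this only mirrors the calling convention; it is not the real InheritanceTree and tracks no inheritance flags):

```python
class ToyTree:
    # Minimal path-addressed tree mirroring the add_branch / add_leaf /
    # get_leaf calling convention; not the real InheritanceTree.
    def __init__(self):
        self.children = {}

    def add_branch(self, path):
        node = self
        for name in path:
            node = node.children.setdefault(name, ToyTree())
        return node

    def add_leaf(self, path, value=None):
        # The last path entry names the leaf; everything before it is a branch.
        branch = self.add_branch(path[:-1])
        branch.children[path[-1]] = value

    def get_leaf(self, path):
        node = self
        for name in path[:-1]:
            node = node.children[name]
        return node.children[path[-1]]

tree = ToyTree()
tree.add_leaf(['neuron', 'soma', 'threshold'], value=0.5)
print(tree.get_leaf(['neuron', 'soma', 'threshold']))  # 0.5
```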