heterocl.schedule

A module for compute scheduling.

class Schedule(sch, inputs)[source]

Bases: object

Create a compute schedule.

This is a wrapper class for tvm.schedule._Schedule.

Parameters
  • sch (tvm.schedule._Schedule) – The TVM schedule

  • inputs (list of Tensor) – Tensors that are the inputs to the schedule
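
Examples

A Schedule is normally obtained from hcl.create_schedule rather than constructed directly. A minimal sketch (the kernel body and tensor names here are only illustrative):

import heterocl as hcl

hcl.init()
A = hcl.placeholder((10,), "A")

def kernel(A):
    # An elementwise increment; any compute API body would do.
    return hcl.compute(A.shape, lambda i: A[i] + 1, "B")

s = hcl.create_schedule([A], kernel)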

dataflow_graph(stages=None, level=0, plot=False)[source]

Create a dataflow graph for a given schedule.

Parameters
  • stages (list of Stage, optional) – The final stages in the graph. If not specified, draw all the stages

  • level (int, optional) – The level of stages to draw. If not specified, draw to the inner-most stages

  • plot (bool, optional) – Whether to draw the graph with matplotlib

Returns

A directed graph that describes the dataflow

Return type

networkx.DiGraph
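
For example, assuming the schedule s from the sketch above, the returned graph can be inspected with the standard networkx API:

# Build the dataflow graph without plotting and list its nodes.
graph = s.dataflow_graph()
print(graph.nodes())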

fork(tensor, dests, axis=0)[source]

Fork a tensor to multiple destinations.

join(srcs, dest=None)[source]

Join multiple source tensors into a single destination.

last_stages = OrderedSet()

partition(target, partition_type=0, dim=0, factor=0)[source]

Partition a Tensor into smaller Tensors or even registers

Users can specify the partition type, which includes Complete, Block, and Cyclic. The default type is Complete, which means we completely partition the specified dimension; in this case the factor is ignored. If Block is specified, the tensor is partitioned into equal-sized blocks, where the number of blocks is given by the factor. If Cyclic is specified, the elements of the tensor are partitioned in a cyclic manner. For example, if the factor is three, the 1st element is assigned to the 1st partitioned tensor, the 2nd element to the 2nd one, the 3rd element to the 3rd one, the 4th element back to the 1st one, and so on. If dim is set to 0, all dimensions are partitioned.

Parameters
  • target (Tensor) – The tensor to be partitioned

  • partition_type ({Complete, Block, Cyclic}, optional) – The partition type

  • dim (int, optional) – The dimension to be partitioned

  • factor (int, optional) – The partition factor
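
Examples

A minimal sketch, assuming the schedule s and placeholder A from the Schedule example above, and assuming the partition types are exposed as hcl.Partition members:

# Split the first dimension of A into 2 equal-sized blocks.
s.partition(A, hcl.Partition.Block, dim=1, factor=2)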

reshape(target, shape)[source]

Reshape a Tensor to a specified new shape

Parameters
  • target (Tensor) – The tensor to be reshaped

  • shape (tuple of int) – The new shape of the tensor
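
For example, a 10-element tensor can be viewed as a 2 x 5 tensor, as long as the total number of elements is unchanged. A sketch, assuming the tensor B produced by the kernel in the Schedule example above:

# View the 10-element output B as a 2 x 5 tensor.
s.reshape(kernel.B, (2, 5))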

reuse_at(target, parent, axis, name=None)[source]

Create a reuse buffer reusing the output of the current stage

This returns a new tensor representing the reuse buffer. A corresponding stage is also created. The new stage will be a sub-stage of the parent stage under the specified axis. Thus, the axis must be within the axis list of the parent stage.

Parameters
  • target (Tensor) – The tensor whose values will be reused

  • parent (Stage) – The stage that reuses the output of the current stage

  • axis (IterVar) – The axis that generates the reuse values

  • name (string, optional) – The name of the reuse buffer

Returns

The new tensor representing the reuse buffer

Return type

Tensor
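
Examples

A minimal sketch of a horizontal blur, where each output element reads a 3-element window of A so the window can be served from a reuse buffer (all names here are illustrative):

import heterocl as hcl

hcl.init()
A = hcl.placeholder((10, 10), "A")

def kernel(A):
    # Each output element reads A[y, x], A[y, x+1], and A[y, x+2].
    return hcl.compute((10, 8),
                       lambda y, x: A[y, x] + A[y, x+1] + A[y, x+2], "B")

s = hcl.create_schedule([A], kernel)
# Reuse values of A across the x axis of stage B.
RB = s.reuse_at(A, s[kernel.B], kernel.B.axis[1], "RB")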

stage_ops = []

subgraph(inputs, outputs)[source]

to(tensors, dst, src=None, axis=0, stream_type=0, depth=1, name=None)[source]

Stream a list of Tensors to the destination devices or stages

Parameters
  • tensors (list of Tensor) – The tensors to be moved

  • dst (device or stage) – The destination of data movement

  • src (device or stage) – The source of data movement

  • axis (int, optional) – Move the axis-th loop body to the xcel scope

  • depth (int, optional) – The streaming channel depth
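
Examples

A minimal sketch, assuming a platform target that provides host and xcel device scopes (the platform name and tensors here are illustrative, not confirmed by this reference):

p = hcl.Platform.aws_f1
# Move the input to the accelerator and the result back to the host.
s.to(A, p.xcel)
s.to(kernel.B, p.host)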

class Stage(name=None, dtype=None, shape=())[source]

Bases: object

Create a stage in the algorithm.

Stage is needed when an imperative DSL block is not used within any other compute API. We can further use the created stage to help us schedule the imperative components within it. It can also be used to describe a higher level of the computation hierarchy. For example, we can wrap several compute APIs into a single stage.

Parameters

  • name (str, optional) – The name of the Stage

Variables
  • stmt_stack (list of list of Stmt) – Stores all statements in two levels: the outer level corresponds to different statement scopes, and the inner level holds the individual statements within each scope

  • var_dict (dict(str, _Var)) – A dictionary whose key is the name of the variable and the value is the variable itself. This enables users to access a variable inside a Stage via a Python attribute

  • axis_list (list of IterVar) – A list of axes that appear in this Stage

  • has_break (bool) – Set to True if there is a break statement within the stage

  • has_return (bool) – Set to True if there is a return statement within the stage

  • ret_dtype (Type) – The returned data type. Only exists for heterocl.compute

  • for_level (int) – The nesting level of the loop where the current statement resides

  • for_id (int) – An index used to label the unnamed axes

  • input_stages (set of Stage) – A set of stages that are inputs to the Stage

  • lhs_tensors (set of Tensor) – The tensors that are updated on the left-hand side

  • last_substages (set of Stage) – A set of sub-stages that are last used in the current stage

  • name_with_prefix (str) – The full name of the stage. This is used when two stages at different levels share the same name

Examples

import heterocl as hcl

A = hcl.placeholder((10,))
with hcl.Stage():
    A[0] = 5
    with hcl.for_(1, 10) as i:
        A[i] = A[i-1] * 2

axis

Get the axes of the stage.

emit(stmt)[source]

Insert statements into the current stage.

static get_current()[source]

Get the current stage.

static get_len()[source]

Get the level of stages.

pop_stmt()[source]

Create a statement from the statements within the current stage.

replace_else(if_stmt, else_stmt)[source]

Add an ELSE or ELIF branch to an existing IF or ELIF branch.