| code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64) |
|---|---|---|---|---|---|
netin, netinfo = process_input(netin, ['C', 'G', 'TO'])
# Set diagonal to 0
netin = set_diagonal(netin, 0)
if axis == 'graphlet' and netinfo['nettype'][-1] == 'u':
triu = np.triu_indices(netinfo['netshape'][0], k=1)
netin = netin[triu[0], triu[1], :]
netin = netin.transpose(... | def binarize_percent(netin, level, sign='pos', axis='time') | Binarizes a network proportionally. When axis='time' (only one available at the moment), the top values for each edge time series are considered.
Parameters
----------
netin : array or dict
network (graphlet or contact representation),
level : float
Percent to keep (expressed as dec... | 2.999741 | 2.840442 | 1.056082 |
netin, netinfo = process_input(netin, ['C', 'G', 'TO'])
trajectory = rdp(netin, level)
contacts = []
# Use the trajectory points as threshold
for n in range(trajectory['index'].shape[0]):
if sign == 'pos':
sel = trajectory['trajectory_points'][n][trajectory['trajectory']
... | def binarize_rdp(netin, level, sign='pos', axis='time') | Binarizes a network based on RDP compression.
Parameters
----------
netin : array or dict
Network (graphlet or contact representation),
level : float
Delta parameter which is the tolerated error in RDP compression.
sign : str, default='pos'
States the sign of the thresholdi... | 3.885113 | 3.87973 | 1.001388 |
if threshold_type == 'percent':
netout = binarize_percent(netin, threshold_level, sign, axis)
elif threshold_type == 'magnitude':
netout = binarize_magnitude(netin, threshold_level, sign)
elif threshold_type == 'rdp':
netout = binarize_rdp(netin, threshold_level, sign, axis)
... | def binarize(netin, threshold_type, threshold_level, sign='pos', axis='time') | Binarizes a network, returning the network. General wrapper function for different binarization functions.
Parameters
----------
netin : array or dict
Network (graphlet or contact representation),
threshold_type : str
What type of threshold to use for binarization. Options: 'rdp', 'perce... | 2.25078 | 2.055175 | 1.095177 |
inputtype = checkInput(netIn)
# Convert TN to G representation
if inputtype == 'TN' and 'TN' in allowedformats and outputformat != 'TN':
G = netIn.df_to_array()
netInfo = {'nettype': netIn.nettype, 'netshape': netIn.netshape}
elif inputtype == 'TN' and 'TN' in allowedformats and out... | def process_input(netIn, allowedformats, outputformat='G') | Takes an input network and checks what format the input is.
Parameters
----------
netIn : array, dict, or TemporalNetwork
Network (graphlet, contact or object)
allowedformats : str
Which formats of network objects are allowed. Options: 'C', 'TN', 'G'.
outputformat: str, default=G
... | 2.646709 | 2.436064 | 1.086469 |
communityID = np.array(communityID)
cid_shape = communityID.shape
if len(cid_shape) > 1:
communityID = communityID.flatten()
new_communityID = np.zeros(len(communityID))
for i, n in enumerate(np.unique(communityID)):
new_communityID[communityID == n] = i
if len(cid_shape) > ... | def clean_community_indexes(communityID) | Takes input of community assignments. Returns the community assignments reindexed using the smallest numbers possible.
Parameters
----------
communityID : array-like
list or array of integers. Output from community detection algorithms.
Returns
-------
new_communityID : array
clea... | 2.007231 | 2.093854 | 0.95863 |
d = collections.OrderedDict()
for c in C['contacts']:
ct = tuple(c)
if ct in d:
d[ct] += 1
else:
d[ct] = 1
new_contacts = []
new_values = []
for (key, value) in d.items():
new_values.append(value)
new_contacts.append(key)
C_ou... | def multiple_contacts_get_values(C) | Given a contact representation with repeated contacts, this function removes duplicates and creates a value
Parameters
----------
C : dict
contact representation with multiple repeated contacts.
Returns
-------
:C_out: dict
Contact representation with duplicate contacts re... | 2.388001 | 2.190151 | 1.090336 |
if len(df) > 0:
idx = np.array(list(map(list, df.values)))
G = np.zeros([netshape[0], netshape[0], netshape[1]])
if idx.shape[1] == 3:
if nettype[-1] == 'u':
idx = np.vstack([idx, idx[:, [1, 0, 2]]])
idx = idx.astype(int)
G[idx[:, 0], ... | def df_to_array(df, netshape, nettype) | Returns a numpy array (snapshot representation) from the dataframe contact list
Parameters:
df : pandas df
pandas df with columns, i,j,t.
netshape : tuple
network shape, format: (node, time)
nettype : str
'wu', 'wd', 'bu', 'bd'
Returns:
--------
... | 1.946345 | 1.890199 | 1.029704 |
if distance_func_name == 'default' and netinfo['nettype'][0] == 'b':
print('Default distance function specified. As network is binary, using Hamming')
distance_func_name = 'hamming'
elif distance_func_name == 'default' and netinfo['nettype'][0] == 'w':
distance_func_name = 'euclide... | def check_distance_funciton_input(distance_func_name, netinfo) | Checks distance_func_name if it is specified as 'default'. Then, given the type of the network, selects a default distance function.
Parameters
----------
distance_func_name : str
distance function name.
netinfo : dict
the output of utils.process_input
Returns
-------... | 3.215411 | 3.008744 | 1.068689 |
path = tenetopath[0] + '/data/parcellation/' + parcellation_name + '.csv'
parc = np.loadtxt(path, skiprows=1, delimiter=',', usecols=[1, 2, 3])
return parc | def load_parcellation_coords(parcellation_name) | Loads coordinates of included parcellations.
Parameters
----------
parcellation_name : str
options: 'gordon2014_333', 'power2012_264', 'shen2013_278'.
Returns
-------
parc : array
parcellation coordinates | 4.322772 | 5.015762 | 0.861837 |
if isinstance(parcellation, str):
parcin = ''
if '+' in parcellation:
parcin = parcellation
parcellation = parcellation.split('+')[0]
if '+OH' in parcin:
subcortical = True
else:
subcortical = None
if '+SUIT' in parcin:
... | def make_parcellation(data_path, parcellation, parc_type=None, parc_params=None) | Performs a parcellation which reduces voxel space to regions of interest (brain data).
Parameters
----------
data_path : str
Path to .nii image.
parcellation : str
Specify which parcellation that you would like to use. For MNI: 'gordon2014_333', 'power2012_264', For TAL: 'shen2013_278'... | 2.531412 | 2.223121 | 1.138675 |
steps = (1.0/(N-1)) * (stop - start)
if np.isscalar(steps):
return steps*np.arange(N) + start
else:
return steps[:, None]*np.arange(N) + start[:, None] | def create_traj_ranges(start, stop, N) | Fills in the trajectory range.
# Adapted from https://stackoverflow.com/a/40624614 | 3.039685 | 2.983643 | 1.018783 |
if not calc:
calc = ''
else:
calc = '_' + calc
if not community:
community = ''
else:
community = 'community'
if 'community' in calc and 'community' in community:
community = ''
if calc == 'community_avg' or calc == 'community_pairs':
communi... | def get_dimord(measure, calc=None, community=None) | Get the dimension order of a network measure.
Parameters
----------
measure : str
Name of function in teneto.networkmeasures.
calc : str, default=None
Calc parameter for the function
community : bool, default=None
If not null, then community property is assumed to be believ... | 3.66602 | 3.49872 | 1.047818 |
newnetwork = tnet.network.copy()
newnetwork['i'] = (tnet.network['i']) + \
((tnet.netshape[0]) * (tnet.network['t']))
newnetwork['j'] = (tnet.network['j']) + \
((tnet.netshape[0]) * (tnet.network['t']))
if 'weight' not in newnetwork.columns:
newnetwork['weight'] = 1
newn... | def create_supraadjacency_matrix(tnet, intersliceweight=1) | Returns a supraadjacency matrix from a temporal network structure
Parameters
--------
tnet : TemporalNetwork
Temporal network (any network type)
intersliceweight : int
Weight that links the same node from adjacent time-points
Returns
--------
supranet : dataframe
Su... | 2.527046 | 2.362498 | 1.06965 |
if t is not None:
df = get_network_when(df, t=t)
if 'weight' in df.columns:
nxobj = nx.from_pandas_edgelist(
df, source='i', target='j', edge_attr='weight')
else:
nxobj = nx.from_pandas_edgelist(df, source='i', target='j')
return nxobj | def tnet_to_nx(df, t=None) | Creates undirected networkx object | 2.394759 | 2.319137 | 1.032608 |
r
tnet = process_input(tnet, ['C', 'G', 'TN'], 'TN')
# Divide resolution by the number of timepoints
resolution = resolution / tnet.T
supranet = create_supraadjacency_matrix(
tnet, intersliceweight=intersliceweight)
if negativeedge == 'ignore':
supranet = supranet[supranet['weig... | def temporal_louvain(tnet, resolution=1, intersliceweight=1, n_iter=100, negativeedge='ignore', randomseed=None, consensus_threshold=0.5, temporal_consensus=True, njobs=1) | r"""
Louvain clustering for a temporal network.
Parameters
-----------
tnet : array, dict, TemporalNetwork
Input network
resolution : int
resolution of Louvain clustering ($\gamma$)
intersliceweight : int
interslice weight of multilayer clustering ($\omega$). Must be pos... | 3.314098 | 3.313821 | 1.000083 |
r
com_membership = np.array(com_membership)
D = []
for i in range(com_membership.shape[0]):
for j in range(i+1, com_membership.shape[0]):
con = np.sum((com_membership[i, :] - com_membership[j, :])
== 0, axis=-1) / com_membership.shape[-1]
twhere ... | def make_consensus_matrix(com_membership, th=0.5) | r"""
Makes the consensus matrix.
Parameters
----------
com_membership : array
Shape should be node, time, iteration.
th : float
threshold to cancel noisy edges
Returns
-------
D : array
consensus matrix | 3.630805 | 3.608872 | 1.006077 |
r
com_membership = np.array(com_membership)
# make first indices be between 0 and 1.
com_membership[:, 0] = clean_community_indexes(com_membership[:, 0])
# loop over all timepoints, get Jaccard distance in a greedy manner for the largest community relative to the time-point before
for t in range(1, com_members... | def make_temporal_consensus(com_membership) | r"""
Matches community labels across time-points
Jaccard matching is done in a greedy fashion, matching the largest community at t with the community at t-1.
Parameters
----------
com_membership : array
Shape should be node, time.
Returns
-------
D : array
temporal cons... | 3.279249 | 3.015183 | 1.087579 |
# Preallocate
flex = np.zeros(communities.shape[0])
# Go from the second time point to last, compare with time-point before
for t in range(1, communities.shape[1]):
flex[communities[:, t] != communities[:, t-1]] += 1
# Normalize
flex = flex / (communities.shape[1] - 1)
return f... | def flexibility(communities) | Amount a node changes community
Parameters
----------
communities : array
Community array of shape (node,time)
Returns
--------
flex : array
Size with the flexibility of each node.
Notes
-----
Flexibility calculates the number of times a node switches its community ... | 4.733977 | 3.730659 | 1.268938 |
if '/' in fname:
split = fname.split('/')
dirnames = '/'.join(split[:-1]) + '/'
fname = split[-1]
else:
dirnames = ''
tags = [tag for tag in fname.split('_') if '-' in tag]
fname_head = '_'.join(tags)
fileformat = '.' + '.'.join(fname.split('.')[1:])
return d... | def drop_bids_suffix(fname) | Given a filename sub-01_run-01_preproc.nii.gz, it will return ['sub-01_run-01', '.nii.gz']
Parameters
----------
fname : str
BIDS filename with suffix. Directories should not be included.
Returns
-------
fname_head : str
BIDS filename without the suffix
fileformat : str
The fil... | 3.282015 | 3.044714 | 1.077939 |
if index_col:
index_col = 0
else:
index_col = None
if header:
header = 0
else:
header = None
df = pd.read_csv(fname, header=header, index_col=index_col, sep='\t')
if return_meta:
json_fname = fname.replace('tsv', 'json')
meta = pd.read_json(... | def load_tabular_file(fname, return_meta=False, header=True, index_col=True) | Given a file name, loads it as a pandas data frame
Parameters
----------
fname : str
file name and path. Must be tsv.
return_meta :
header : bool (default True)
if there is a header in the tsv file, true will use first row in file.
index_col : bool (default True)
if there i... | 1.917804 | 1.945934 | 0.985544 |
if allowedfileformats == 'default':
allowedfileformats = ['.tsv', '.nii.gz']
for f in allowedfileformats:
fname = fname.split(f)[0]
fname += '.json'
if os.path.exists(fname):
with open(fname) as fs:
sidecar = json.load(fs)
else:
sidecar = {}
if 'f... | def get_sidecar(fname, allowedfileformats='default') | Loads sidecar or creates one | 2.457534 | 2.412616 | 1.018618 |
relfun = []
threshold = []
for ec in exclusion_criteria:
if ec[0:2] == '>=':
relfun.append(np.greater_equal)
threshold.append(float(ec[2:]))
elif ec[0:2] == '<=':
relfun.append(np.less_equal)
threshold.append(float(ec[2:]))
elif ec... | def process_exclusion_criteria(exclusion_criteria) | Parses an exclusion critera string to get the function and threshold.
Parameters
----------
exclusion_criteria : list
list of strings where each string is of the format [relation][threshold]. E.g. \'<0.5\' or \'>=1\'
Returns
-------
relfun : list
list of numpy f... | 2.13075 | 1.827252 | 1.166095 |
if tnet is not None and paths is not None:
raise ValueError('Only network or path input allowed.')
if tnet is None and paths is None:
raise ValueError('No input.')
# if shortest paths are not calculated, calculate them
if tnet is not None:
paths = shortest_temporal_path(tne... | def reachability_latency(tnet=None, paths=None, rratio=1, calc='global') | Reachability latency. This is the r-th longest temporal path.
Parameters
---------
data : array or dict
Can either be a network (graphlet or contact), binary undirected only. Alternatively, a paths dictionary (output of teneto.networkmeasure.shortest_temporal_path)
rratio: float (default... | 3.028404 | 2.779521 | 1.089542 |
# make sure the static and temporal communities have the same number of nodes
if staticcommunities.shape[0] != temporalcommunities.shape[0]:
raise ValueError(
'Temporal and static communities have different dimensions')
alleg = allegiance(temporalcommunities)
Rcoeff = np.z... | def recruitment(temporalcommunities, staticcommunities) | Calculates recruitment coefficient for each node. Recruitment coefficient is the average probability of nodes from the
same static communities being in the same temporal communities at other time-points or during different tasks.
Parameters:
------------
temporalcommunities : array
temporal ... | 3.982296 | 3.488743 | 1.14147 |
# make sure the static and temporal communities have the same number of nodes
if staticcommunities.shape[0] != temporalcommunities.shape[0]:
raise ValueError(
'Temporal and static communities have different dimensions')
alleg = allegiance(temporalcommunities)
Icoeff = np.zero... | def integration(temporalcommunities, staticcommunities) | Calculates the integration coefficient for each node. Measures the average probability
that a node is in the same community as nodes from other systems.
Parameters:
------------
temporalcommunities : array
temporal communities vector (node,time)
staticcommunities : array
... | 4.464949 | 3.614507 | 1.235286 |
# Process input
tnet = process_input(tnet, ['C', 'G', 'TN'], 'TN')
if tnet.nettype[0] == 'w':
print('WARNING: assuming connections to be binary when computing intercontacttimes')
# Each time series is padded with a 0 at the start and end. Then t[0:-1]-t[1:].
# Then discard the noninfo... | def intercontacttimes(tnet) | Calculates the intercontacttimes of each edge in a network.
Parameters
-----------
tnet : array, dict
Temporal network (graphlet or contact). Nettype: 'bu', 'bd'
Returns
---------
contacts : dict
Intercontact times as numpy array in dictionary. contacts['intercontacttimes']
... | 3.052711 | 2.980393 | 1.024264 |
# Create report directory
if not os.path.exists(sdir):
os.makedirs(sdir)
# Add a slash to the file directory if not included, to avoid DirNameFileName
# instead of DirName/FileName being created
if sdir[-1] != '/':
sdir += '/'
report_html = '<html><body>'
if 'method' in r... | def gen_report(report, sdir='./', report_name='report.html') | Generates report of derivation and postprocess steps in teneto.derive | 2.728611 | 2.705617 | 1.008499 |
if init == 1:
self.history = []
self.history.append([fname, fargs]) | def add_history(self, fname, fargs, init=0) | Adds a processing step to TenetoBIDS.history. | 3.481344 | 3.045114 | 1.143256 |
mods = [(m.__name__, m.__version__)
for m in sys.modules.values() if m if hasattr(m, '__version__')]
with open(dirname + '/requirements.txt', 'w') as f:
for m in mods:
m = list(m)
if not isinstance(m[1], str):
m[1] ... | def export_history(self, dirname) | Exports TenetoBIDShistory.py, tenetoinfo.json, requirements.txt (modules currently imported) to dirname
Parameters
---------
dirname : str
directory to export entire TenetoBIDS history. | 3.08671 | 2.335047 | 1.321905 |
if not njobs:
njobs = self.njobs
self.add_history(inspect.stack()[0][3], locals(), 1)
files = self.get_selected_files(quiet=1)
confound_files = self.get_selected_files(quiet=1, pipeline='confound')
if confound_files:
confounds_exist = True
... | def derive_temporalnetwork(self, params, update_pipeline=True, tag=None, njobs=1, confound_corr_report=True) | Derive time-varying connectivity on the selected files.
Parameters
----------
params : dict.
See teneto.timeseries.derive_temporalnetwork for the structure of the param dictionary. Assumes dimord is time,node (output of other TenetoBIDS functions)
update_pipeline : bool
... | 3.936603 | 3.678212 | 1.070249 |
if not njobs:
njobs = self.njobs
self.add_history(inspect.stack()[0][3], locals(), 1)
files = self.get_selected_files(quiet=1)
R_group = []
with ProcessPoolExecutor(max_workers=njobs) as executor:
job = {executor.submit(
self._ru... | def make_functional_connectivity(self, njobs=None, returngroup=False, file_hdr=None, file_idx=None) | Makes connectivity matrix for each of the subjects.
Parameters
----------
returngroup : bool, default=False
If true, returns the group average connectivity matrix.
njobs : int
How many parallel jobs to run
file_idx : bool
Default False, true i... | 3.989188 | 4.038682 | 0.987745 |
file_name = f.split('/')[-1].split('.')[0]
if tag != '':
tag = '_' + tag
if suffix:
file_name, _ = drop_bids_suffix(file_name)
save_name = file_name + tag
save_name += '_' + suffix
else:
save_name = file_name + tag
... | def _save_namepaths_bids_derivatives(self, f, tag, save_directory, suffix=None) | Creates output directory and output name
Parameters
---------
f : str
input files, includes the file bids_suffix
tag : str
what should be added to f in the output file.
save_directory : str
additional directory that the output file should go in... | 3.698952 | 3.384067 | 1.093049 |
if not self.pipeline:
print('Please set pipeline first.')
self.get_pipeline_alternatives(quiet)
else:
if tag == 'sub':
datapath = self.BIDS_dir + '/derivatives/' + self.pipeline + '/'
tag_alternatives = [
f.... | def get_tags(self, tag, quiet=1) | Returns which tag alternatives can be identified in the BIDS derivatives structure. | 2.570588 | 2.368181 | 1.085469 |
if not os.path.exists(self.BIDS_dir + '/derivatives/'):
print('Derivative directory not found. Is the data preprocessed?')
else:
pipeline_alternatives = os.listdir(self.BIDS_dir + '/derivatives/')
if quiet == 0:
print('Derivative alternatives:... | def get_pipeline_alternatives(self, quiet=0) | The pipeline are the different outputs that are placed in the ./derivatives directory.
get_pipeline_alternatives gets those which are found in the specified BIDS directory structure. | 3.615895 | 3.293552 | 1.097871 |
if not self.pipeline:
print('Please set pipeline first.')
self.get_pipeline_alternatives()
else:
pipeline_subdir_alternatives = []
for s in self.bids_tags['sub']:
derdir_files = os.listdir(
self.BIDS_dir + '/der... | def get_pipeline_subdir_alternatives(self, quiet=0) | Note
-----
This function currently returns the wrong folders and will be fixed in the future.
This function should return ./derivatives/pipeline/sub-xx/[ses-yy/][func/]/pipeline_subdir
But it does not care about ses-yy at the moment. | 2.786347 | 2.677234 | 1.040756 |
self.add_history(inspect.stack()[0][3], locals(), 1)
if isinstance(confound, str):
confound = [confound]
if isinstance(exclusion_criteria, str):
exclusion_criteria = [exclusion_criteria]
if isinstance(confound_stat, str):
confound_stat = [conf... | def set_exclusion_file(self, confound, exclusion_criteria, confound_stat='mean') | Excludes subjects given a certain exclusion criteria.
Parameters
----------
confound : str or list
string or list of confound name(s) from confound files
exclusion_criteria : str or list
for each confound, an exclusion_criteria should be expresse... | 2.879331 | 2.792995 | 1.030912 |
if not njobs:
njobs = self.njobs
self.add_history(inspect.stack()[0][3], locals(), 1)
parc_name = parcellation.split('_')[0].lower()
# Check confounds have been specified
if not self.confounds and removeconfounds:
raise ValueError(
... | def make_parcellation(self, parcellation, parc_type=None, parc_params=None, network='defaults', update_pipeline=True, removeconfounds=False, tag=None, njobs=None, clean_params=None, yeonetworkn=None) | Reduces the data from voxel to parcellation space. Files get saved in a teneto folder in the derivatives with a roi tag at the end.
Parameters
-----------
parcellation : str
specify which parcellation that you would like to use. For MNI: 'power2012_264', 'gordon2014_333'. TAL: 'she... | 5.05477 | 4.756377 | 1.062735 |
if not njobs:
njobs = self.njobs
self.add_history(inspect.stack()[0][3], locals(), 1)
if not tag:
tag = ''
else:
tag = 'desc-' + tag
if community_type == 'temporal':
files = self.get_selected_files(quiet=True)
... | def communitydetection(self, community_detection_params, community_type='temporal', tag=None, file_hdr=False, file_idx=False, njobs=None) | Calls temporal_louvain_with_consensus on connectivity data
Parameters
----------
community_detection_params : dict
kwargs for detection. See teneto.communitydetection.louvain.temporal_louvain_with_consensus
community_type : str
Either 'temporal' or 'static'. If ... | 5.442759 | 5.180504 | 1.050623 |
if not njobs:
njobs = self.njobs
self.add_history(inspect.stack()[0][3], locals(), 1)
if not self.confounds and not confounds:
raise ValueError(
'Specified confounds are not found. Make sure that you have run self.set_confunds([\'Confound1\',\'Co... | def removeconfounds(self, confounds=None, clean_params=None, transpose=None, njobs=None, update_pipeline=True, overwrite=True, tag=None) | Removes specified confounds using nilearn.signal.clean
Parameters
----------
confounds : list
List of confounds. Can be prespecified in set_confounds
clean_params : dict
Dictionary of kawgs to pass to nilearn.signal.clean
transpose : bool (default False)
... | 4.260266 | 3.968511 | 1.073518 |
if not njobs:
njobs = self.njobs
self.add_history(inspect.stack()[0][3], locals(), 1)
# measure can be string or list
if isinstance(measure, str):
measure = [measure]
# measure_params can be dictionaary or list of dictionaries
if isinstan... | def networkmeasures(self, measure=None, measure_params=None, tag=None, njobs=None) | Calculates a network measure
For available functions see: teneto.networkmeasures
Parameters
----------
measure : str or list
Name of function(s) from teneto.networkmeasures that will be run.
measure_params : dict or list of dictionaries
Containing kwar... | 3.711886 | 3.895005 | 0.952986 |
self.add_history(inspect.stack()[0][3], locals(), 1)
if not os.path.exists(self.BIDS_dir + '/derivatives/' + confound_pipeline):
print('Specified derivative directory not found.')
self.get_pipeline_alternatives()
else:
# Todo: perform check that pip... | def set_confound_pipeline(self, confound_pipeline) | There may be times when the pipeline is updated (e.g. teneto) but you want the confounds from the preprocessing pipeline (e.g. fmriprep).
To do this, you set the confound_pipeline to be the preprocessing pipeline where the confound files are.
Parameters
----------
confound_pipeline : ... | 8.540937 | 8.295675 | 1.029565 |
self.add_history(inspect.stack()[0][3], locals(), 1)
self.bids_suffix = bids_suffix | def set_bids_suffix(self, bids_suffix) | The last analysis step is the final tag that is present in files. | 6.845271 | 6.459279 | 1.059758 |
self.add_history(inspect.stack()[0][3], locals(), 1)
if not os.path.exists(self.BIDS_dir + '/derivatives/' + pipeline):
print('Specified derivative directory not found.')
self.get_pipeline_alternatives()
else:
# Todo: perform check that pipeline is va... | def set_pipeline(self, pipeline) | Specify the pipeline. See get_pipeline_alternatives to see what is available. Input should be a string. | 10.255918 | 7.598383 | 1.34975 |
print('--- DATASET INFORMATION ---')
print('--- Subjects ---')
if self.raw_data_exists:
if self.BIDS.get_subjects():
print('Number of subjects (in dataset): ' +
str(len(self.BIDS.get_subjects())))
print('Subjects (in da... | def print_dataset_summary(self) | Prints information about the BIDS data and the files currently selected. | 1.947178 | 1.886012 | 1.032431 |
if fname[-4:] != '.pkl':
fname += '.pkl'
with open(fname, 'rb') as f:
tnet = pickle.load(f)
if reload_object:
reloadnet = teneto.TenetoBIDS(tnet.BIDS_dir, pipeline=tnet.pipeline, pipeline_subdir=tnet.pipeline_subdir, bids_tags=tnet.bids_tags, bids_su... | def load_frompickle(cls, fname, reload_object=False) | Loads a saved instance of TenetoBIDS.
fname : str
path to pickle object (output of TenetoBIDS.save_aspickle)
reload_object : bool (default False)
reloads object by calling teneto.TenetoBIDS (some information lost, for development)
Returns
-------
self :
... | 4.056008 | 3.802735 | 1.066603 |
if datatype == 'temporalnetwork' and not measure:
raise ValueError(
'When datatype is temporalnetwork, \'measure\' must also be specified.')
self.add_history(inspect.stack()[0][3], locals(), 1)
data_list = []
trialinfo_list = []
for s in se... | def load_data(self, datatype='tvc', tag=None, measure='') | Function loads time-varying connectivity estimates created by the TenetoBIDS.derive function.
The default grabs all data (in numpy arrays) in the teneto/../func/tvc/ directory.
Data is placed in teneto.tvc_data_
Parameters
----------
datatype : str
\'tvc\', \'parcel... | 5.143754 | 4.640587 | 1.108427 |
'''
Returns temporal closeness centrality per node.
Parameters
-----------
Input should be *either* tnet or paths.
data : array or dict
Temporal network input (graphlet or contact). nettype: 'bu', 'bd'.
paths : pandas dataframe
Output of TenetoBIDS.networkmeasure.shorte... | def temporal_closeness_centrality(tnet=None, paths=None) | Returns temporal closeness centrality per node.
Parameters
-----------
Input should be *either* tnet or paths.
data : array or dict
Temporal network input (graphlet or contact). nettype: 'bu', 'bd'.
paths : pandas dataframe
Output of TenetoBIDS.networkmeasure.shortest_temporal_... | 6.580522 | 2.403835 | 2.73751 |
if isinstance(reducer, str):
reducer = REDUCER_DICT[reducer]
flat_dict = {}
def _flatten(d, parent=None):
for key, value in six.viewitems(d):
flat_key = reducer(parent, key)
if isinstance(value, Mapping):
_flatten(value, flat_key)
els... | def flatten(d, reducer='tuple', inverse=False) | Flatten dict-like object.
Parameters
----------
d: dict-like object
The dict that will be flattened.
reducer: {'tuple', 'path', function} (default: 'tuple')
The key joining method. If a function is given, the function will be
used to reduce.
'tuple': The resulting key wi... | 2.311543 | 2.538382 | 0.910636 |
assert keys
key = keys[0]
if len(keys) == 1:
if key in d:
raise ValueError("duplicated key '{}'".format(key))
d[key] = value
return
d = d.setdefault(key, {})
nested_set_dict(d, keys[1:], value) | def nested_set_dict(d, keys, value) | Set a value to a sequence of nested keys
Parameters
----------
d: Mapping
keys: Sequence[str]
value: Any | 2.346478 | 2.699527 | 0.869218 |
if isinstance(splitter, str):
splitter = SPLITTER_DICT[splitter]
unflattened_dict = {}
for flat_key, value in six.viewitems(d):
if inverse:
flat_key, value = value, flat_key
key_tuple = splitter(flat_key)
nested_set_dict(unflattened_dict, key_tuple, value)
... | def unflatten(d, splitter='tuple', inverse=False) | Unflatten dict-like object.
Parameters
----------
d: dict-like object
The dict that will be unflattened.
splitter: {'tuple', 'path', function} (default: 'tuple')
The key splitting method. If a function is given, the function will be
used to split.
'tuple': Use each eleme... | 2.80591 | 3.133884 | 0.895346 |
if not HAS_MATPLOTLIB:
raise ImportError("matplotlib package is required for plotting "
"supports.")
fig, ax = plt.subplots()
plot_pianoroll(ax, track.pianoroll, track.is_drum, beat_resolution,
downbeats, preset=preset, cmap=cmap, xtick=xtick,
... | def plot_track(track, filename=None, beat_resolution=None, downbeats=None,
preset='default', cmap='Blues', xtick='auto', ytick='octave',
xticklabel=True, yticklabel='auto', tick_loc=None,
tick_direction='in', label='both', grid='both',
grid_linestyle=':', grid... | Plot the pianoroll or save a plot of the pianoroll.
Parameters
----------
filename :
The filename to which the plot is saved. If None, save nothing.
beat_resolution : int
The number of time steps used to represent a beat. Required and only
effective when `xtick` is 'beat'.
d... | 1.866401 | 1.99786 | 0.9342 |
if track is not None:
if not isinstance(track, Track):
raise TypeError("`track` must be a pypianoroll.Track instance.")
track.check_validity()
else:
track = Track(pianoroll, program, is_drum, name)
self.tracks.append(track) | def append_track(self, track=None, pianoroll=None, program=0, is_drum=False,
name='unknown') | Append a multitrack.Track instance to the track list or create a new
multitrack.Track object and append it to the track list.
Parameters
----------
track : pianoroll.Track
A :class:`pypianoroll.Track` instance to be appended to the track
list.
pianoroll :... | 2.723031 | 2.831453 | 0.961708 |
# tracks
for track in self.tracks:
if not isinstance(track, Track):
raise TypeError("`tracks` must be a list of "
"`pypianoroll.Track` instances.")
track.check_validity()
# tempo
if not isinstance(self.tempo... | def check_validity(self) | Raise an error if any invalid attribute found.
Raises
------
TypeError
If an attribute has an invalid type.
ValueError
If an attribute has an invalid value (of the correct type). | 1.885514 | 1.867777 | 1.009497 |
for track in self.tracks:
track.clip(lower, upper) | def clip(self, lower=0, upper=127) | Clip the pianorolls of all tracks by the given lower and upper bounds.
Parameters
----------
lower : int or float
The lower bound to clip the pianorolls. Defaults to 0.
upper : int or float
The upper bound to clip the pianorolls. Defaults to 127. | 5.518245 | 4.178946 | 1.320487 |
active_length = 0
for track in self.tracks:
now_length = track.get_active_length()
if active_length < track.get_active_length():
active_length = now_length
return active_length | def get_active_length(self) | Return the maximum active length (i.e., without trailing silence) among
the pianorolls of all tracks. The unit is time step.
Returns
-------
active_length : int
The maximum active length (i.e., without trailing silence) among the
pianorolls of all tracks. The uni... | 3.03312 | 3.09052 | 0.981427 |
lowest, highest = self.tracks[0].get_active_pitch_range()
if len(self.tracks) > 1:
for track in self.tracks[1:]:
low, high = track.get_active_pitch_range()
if low < lowest:
lowest = low
if high > highest:
... | def get_active_pitch_range(self) | Return the active pitch range of the pianorolls of all tracks as a tuple
(lowest, highest).
Returns
-------
lowest : int
The lowest active pitch among the pianorolls of all tracks.
highest : int
The highest active pitch among the pianorolls of all tracks.
if self.downbeat is None:
return []
downbeat_steps = np.nonzero(self.downbeat)[0].tolist()
return downbeat_steps | def get_downbeat_steps(self) | Return the indices of time steps that contain downbeats.
Returns
-------
downbeat_steps : list
The indices of time steps that contain downbeats. | 3.369263 | 3.294505 | 1.022692 |
empty_track_indices = [idx for idx, track in enumerate(self.tracks)
if not np.any(track.pianoroll)]
return empty_track_indices | def get_empty_tracks(self) | Return the indices of tracks with empty pianorolls.
Returns
-------
empty_track_indices : list
The indices of tracks with empty pianorolls. | 4.312263 | 3.439944 | 1.253585 |
max_length = 0
for track in self.tracks:
if max_length < track.pianoroll.shape[0]:
max_length = track.pianoroll.shape[0]
return max_length | def get_max_length(self) | Return the maximum length of the pianorolls along the time axis (in
time step).
Returns
-------
max_length : int
The maximum length of the pianorolls along the time axis (in time
step). | 2.647152 | 2.661066 | 0.994771 |
stacked = self.get_stacked_pianoroll()
if mode == 'any':
merged = np.any(stacked, axis=2)
elif mode == 'sum':
merged = np.sum(stacked, axis=2)
elif mode == 'max':
merged = np.max(stacked, axis=2)
else:
raise ValueError("`m... | def get_merged_pianoroll(self, mode='sum') | Return the merged pianoroll.
Parameters
----------
mode : {'sum', 'max', 'any'}
A string that indicates the merging strategy to apply along the
track axis. Default to 'sum'.
- In 'sum' mode, the merged pianoroll is the sum of all the
pianorolls... | 2.118396 | 2.18024 | 0.971634 |
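The three merge modes reduce a stacked (time, pitch, track) array along the last axis. A self-contained numpy sketch with two toy 4×3 "pianorolls" (real pypianoroll uses 128 pitch columns; the arrays here are illustrative):

```python
import numpy as np

# Two toy pianorolls: 4 time steps x 3 pitches.
a = np.array([[0, 50, 0], [0, 0, 0], [10, 0, 0], [0, 0, 0]])
b = np.array([[0, 30, 0], [0, 0, 0], [0, 0, 0], [0, 0, 90]])

stacked = np.stack([a, b], axis=-1)   # shape (4, 3, 2)

merged_sum = np.sum(stacked, axis=2)  # velocities add up across tracks
merged_max = np.max(stacked, axis=2)  # the louder track wins
merged_any = np.any(stacked, axis=2)  # boolean union of active cells
```

The `'sum'` mode can exceed the MIDI velocity range, which is why the library also offers `'max'` and `'any'`.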
multitrack = deepcopy(self)
multitrack.pad_to_same()
stacked = np.stack([track.pianoroll for track in multitrack.tracks], -1)
return stacked | def get_stacked_pianoroll(self) | Return a stacked multitrack pianoroll. The shape of the returned array is
(n_time_steps, 128, n_tracks).
Returns
-------
stacked : np.ndarray, shape=(n_time_steps, 128, n_tracks)
The stacked pianoroll. | 4.018862 | 4.567359 | 0.879909 |
def reconstruct_sparse(target_dict, name):
return csc_matrix((target_dict[name+'_csc_data'],
target_dict[name+'_csc_indices'],
target_dict[name+'_csc_indptr']),
shape=target_dict[name+'_... | def load(self, filename) | Load an npz file. Supports only files previously saved by
:meth:`pypianoroll.Multitrack.save`.
Notes
-----
Attribute values will all be overwritten.
Parameters
----------
filename : str
The name of the npz file to be loaded. | 2.775418 | 2.725425 | 1.018343 |
if mode not in ('max', 'sum', 'any'):
raise ValueError("`mode` must be one of {'max', 'sum', 'any'}.")
merged = self[track_indices].get_merged_pianoroll(mode)
merged_track = Track(merged, program, is_drum, name)
self.append_track(merged_track)
if remove_me... | def merge_tracks(self, track_indices=None, mode='sum', program=0,
is_drum=False, name='merged', remove_merged=False) | Merge pianorolls of the tracks specified by `track_indices`. The merged
track will have program number as given by `program` and drum indicator
as given by `is_drum`. The merged track will be appended at the end of
the track list.
Parameters
----------
track_indices : li... | 2.741085 | 2.752048 | 0.996016 |
max_length = self.get_max_length()
for track in self.tracks:
if track.pianoroll.shape[0] < max_length:
track.pad(max_length - track.pianoroll.shape[0]) | def pad_to_same(self) | Pad shorter pianorolls with zeros at the end along the time axis to
make the resulting pianoroll lengths the same as the maximum pianoroll
length among all the tracks. | 2.965192 | 2.088281 | 1.41992 |
pm = pretty_midi.PrettyMIDI(filename)
self.parse_pretty_midi(pm, **kwargs) | def parse_midi(self, filename, **kwargs) | Parse a MIDI file.
Parameters
----------
filename : str
The name of the MIDI file to be parsed.
**kwargs:
See :meth:`pypianoroll.Multitrack.parse_pretty_midi` for full
documentation. | 3.80752 | 4.499307 | 0.846246 |
if isinstance(track_indices, int):
track_indices = [track_indices]
self.tracks = [track for idx, track in enumerate(self.tracks)
if idx not in track_indices] | def remove_tracks(self, track_indices) | Remove tracks specified by `track_indices`.
Parameters
----------
track_indices : list
The indices of the tracks to be removed. | 2.08517 | 2.51969 | 0.82755 |
def update_sparse(target_dict, sparse_matrix, name):
csc = csc_matrix(sparse_matrix)
target_dict[name+'_csc_data'] = csc.data
target_dict[name+'_csc_indices'] = csc.indices
target_dict[name+'_csc_indptr'] = csc.indptr
target_dict[... | def save(self, filename, compressed=True) | Save the multitrack pianoroll to a (compressed) npz file, which can be
later loaded by :meth:`pypianoroll.Multitrack.load`.
Notes
-----
To reduce the file size, the pianorolls are first converted to instances
of scipy.sparse.csc_matrix, whose component arrays are then collected
... | 2.42716 | 2.164958 | 1.121112 |
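The save/load pair round-trips each pianoroll through the component arrays of a `scipy.sparse.csc_matrix`. A minimal sketch of that decomposition and reconstruction, assuming SciPy is available (the `pr_csc_*` key names are illustrative, not the exact keys pypianoroll writes):

```python
import numpy as np
from scipy.sparse import csc_matrix

pianoroll = np.zeros((8, 128), dtype=np.uint8)
pianoroll[0, 60] = 100  # a single note

# Decompose into CSC component arrays, as save() does.
csc = csc_matrix(pianoroll)
stored = {
    'pr_csc_data': csc.data,
    'pr_csc_indices': csc.indices,
    'pr_csc_indptr': csc.indptr,
    'pr_csc_shape': csc.shape,
}

# Rebuild the matrix from the stored arrays, as load() does.
restored = csc_matrix(
    (stored['pr_csc_data'], stored['pr_csc_indices'],
     stored['pr_csc_indptr']),
    shape=stored['pr_csc_shape']).toarray()
```

Pianorolls are mostly zeros, so storing only the CSC component arrays inside the npz archive keeps the file small.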
self.check_validity()
pm = pretty_midi.PrettyMIDI(initial_tempo=self.tempo[0])
# TODO: Add downbeat support -> time signature change events
# TODO: Add tempo support -> tempo change events
if constant_tempo is None:
constant_tempo = self.tempo[0]
tim... | def to_pretty_midi(self, constant_tempo=None, constant_velocity=100) | Convert to a :class:`pretty_midi.PrettyMIDI` instance.
Notes
-----
- Only constant tempo is supported for now.
- The velocities of the converted pianorolls are clipped to [0, 127],
i.e. values below 0 and values beyond 127 are replaced by 0 and 127,
respectively.
... | 2.652399 | 2.666023 | 0.99489 |
for track in self.tracks:
if not track.is_drum:
track.transpose(semitone) | def transpose(self, semitone) | Transpose the pianorolls of all tracks by a number of semitones, where
positive values are for higher key, while negative values are for lower
key. The drum tracks are ignored.
Parameters
----------
semitone : int
The number of semitones to transpose the pianorolls. | 4.447332 | 3.724059 | 1.194216 |
active_length = self.get_active_length()
for track in self.tracks:
track.pianoroll = track.pianoroll[:active_length] | def trim_trailing_silence(self) | Trim the trailing silences of the pianorolls of all tracks. Trailing
silences are considered globally. | 4.846605 | 3.522719 | 1.375813 |
if not filename.endswith(('.mid', '.midi', '.MID', '.MIDI')):
filename = filename + '.mid'
pm = self.to_pretty_midi()
pm.write(filename) | def write(self, filename) | Write the multitrack pianoroll to a MIDI file.
Parameters
----------
filename : str
The name of the MIDI file to which the multitrack pianoroll is
written. | 4.215076 | 3.44287 | 1.224292 |
if not isinstance(arr, np.ndarray):
raise TypeError("`arr` must be of np.ndarray type")
if not (np.issubdtype(arr.dtype, np.bool_)
or np.issubdtype(arr.dtype, np.number)):
return False
if arr.ndim != 2:
return False
if arr.shape[1] != 128:
return False
... | def check_pianoroll(arr) | Return True if the array is a standard piano-roll matrix. Otherwise,
return False. Raise TypeError if the input object is not a numpy array. | 2.148167 | 2.172539 | 0.988782 |
_check_supported(obj)
copied = deepcopy(obj)
copied.binarize(threshold)
return copied | def binarize(obj, threshold=0) | Return a copy of the object with binarized piano-roll(s).
Parameters
----------
threshold : int or float
Threshold to binarize the piano-roll(s). Default to zero. | 7.253647 | 11.036232 | 0.657258 |
_check_supported(obj)
copied = deepcopy(obj)
copied.clip(lower, upper)
return copied | def clip(obj, lower=0, upper=127) | Return a copy of the object with piano-roll(s) clipped by a lower bound
and an upper bound specified by `lower` and `upper`, respectively.
Parameters
----------
lower : int or float
The lower bound to clip the piano-roll. Default to 0.
upper : int or float
The upper bound to clip th... | 6.137272 | 9.723185 | 0.6312 |
_check_supported(obj)
copied = deepcopy(obj)
copied.pad(pad_length)
return copied | def pad(obj, pad_length) | Return a copy of the object with piano-roll padded with zeros at the end
along the time axis.
Parameters
----------
pad_length : int
The length to pad along the time axis with zeros. | 6.623208 | 10.187099 | 0.650156 |
_check_supported(obj)
copied = deepcopy(obj)
copied.pad_to_multiple(factor)
return copied | def pad_to_multiple(obj, factor) | Return a copy of the object with its piano-roll padded with zeros at the
end along the time axis with the minimal length that makes the length of
the resulting piano-roll a multiple of `factor`.
Parameters
----------
factor : int
The value which the length of the resulting piano-roll will be... | 6.376031 | 10.097955 | 0.631418 |
if not isinstance(obj, Multitrack):
raise TypeError("Only `pypianoroll.Multitrack` objects are supported.")
copied = deepcopy(obj)
copied.pad_to_same()
return copied | def pad_to_same(obj) | Return a copy of the object with shorter piano-rolls padded with zeros
at the end along the time axis to the length of the piano-roll with the
maximal length. | 7.100032 | 5.867205 | 1.210122 |
if not filepath.endswith(('.mid', '.midi', '.MID', '.MIDI')):
raise ValueError("Only MIDI files are supported")
return Multitrack(filepath, beat_resolution=beat_resolution, name=name) | def parse(filepath, beat_resolution=24, name='unknown') | Return a :class:`pypianoroll.Multitrack` object loaded from a MIDI
(.mid, .midi, .MID, .MIDI) file.
Parameters
----------
filepath : str
The file path to the MIDI file. | 3.290745 | 2.869472 | 1.146812 |
if not isinstance(obj, Multitrack):
raise TypeError("Only `pypianoroll.Multitrack` objects are supported.")
obj.save(filepath, compressed) | def save(filepath, obj, compressed=True) | Save the object to a .npz file.
Parameters
----------
filepath : str
The path to save the file.
obj: `pypianoroll.Multitrack` objects
The object to be saved. | 7.594944 | 5.497353 | 1.381564 |
_check_supported(obj)
copied = deepcopy(obj)
copied.transpose(semitone)
return copied | def transpose(obj, semitone) | Return a copy of the object with piano-roll(s) transposed by `semitone`
semitones.
Parameters
----------
semitone : int
Number of semitones to transpose the piano-roll(s). | 6.022088 | 10.241092 | 0.588032 |
_check_supported(obj)
copied = deepcopy(obj)
length = copied.get_active_length()
copied.pianoroll = copied.pianoroll[:length]
return copied | def trim_trailing_silence(obj) | Return a copy of the object with trimmed trailing silence of the
piano-roll(s). | 7.972359 | 6.668567 | 1.195513 |
if not isinstance(obj, Multitrack):
raise TypeError("Only `pypianoroll.Multitrack` objects are supported.")
obj.write(filepath) | def write(obj, filepath) | Write the object to a MIDI file.
Parameters
----------
filepath : str
The path to write the MIDI file. | 7.763184 | 7.183427 | 1.080707 |
if not isinstance(pianoroll, np.ndarray):
raise TypeError("`pianoroll` must be of np.ndarray type.")
if not (np.issubdtype(pianoroll.dtype, np.bool_)
or np.issubdtype(pianoroll.dtype, np.number)):
raise TypeError("The data type of `pianoroll` must be np.bool_ or a "
... | def _validate_pianoroll(pianoroll) | Raise an error if the input array is not a standard pianoroll. | 1.910924 | 1.894665 | 1.008581 |
_validate_pianoroll(pianoroll)
reshaped = pianoroll[:, :120].reshape(-1, 10, 12)
reshaped[..., :8] += pianoroll[:, 120:].reshape(-1, 1, 8)
return np.sum(reshaped, 1) | def _to_chroma(pianoroll) | Return the unnormalized chroma features of a pianoroll. | 2.94459 | 2.857388 | 1.030518 |
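The chroma fold treats pitches 0–119 as 10 octaves of 12 pitch classes, adds the leftover top pitches 120–127 into the first 8 classes, and sums over the octave axis. A standalone sketch with toy note placement (the `astype` copy keeps the input array from being mutated by the in-place add):

```python
import numpy as np

pianoroll = np.zeros((4, 128), dtype=int)
pianoroll[0, 60] = 1  # C4 -> pitch class 0
pianoroll[0, 72] = 1  # C5 -> pitch class 0 as well
pianoroll[1, 61] = 1  # C#4 -> pitch class 1

# View pitches 0-119 as (octave, pitch class), then fold octaves.
reshaped = pianoroll[:, :120].reshape(-1, 10, 12).astype(int)
reshaped[..., :8] += pianoroll[:, 120:].reshape(-1, 1, 8)
chroma = reshaped.sum(axis=1)  # shape (n_time_steps, 12)
```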
_validate_pianoroll(pianoroll)
reshaped = pianoroll.reshape(-1, beat_resolution * pianoroll.shape[1])
n_empty_beats = np.count_nonzero(~reshaped.any(1))
return n_empty_beats / len(reshaped) | def empty_beat_rate(pianoroll, beat_resolution) | Return the ratio of empty beats to the total number of beats in a
pianoroll. | 2.961781 | 2.79248 | 1.060628 |
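The reshape gives one row per beat, so a single `any` per row decides beat activity. A toy sketch of the idea, counting the beats with no active notes as the docstring describes:

```python
import numpy as np

beat_resolution = 4
pianoroll = np.zeros((16, 128), dtype=bool)  # 4 beats of 4 time steps
pianoroll[0, 60] = True                      # only the first beat sounds

# One row per beat: shape (n_beats, beat_resolution * 128).
reshaped = pianoroll.reshape(-1, beat_resolution * pianoroll.shape[1])
n_empty_beats = np.count_nonzero(~reshaped.any(axis=1))
rate = n_empty_beats / len(reshaped)
```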
_validate_pianoroll(pianoroll)
chroma = _to_chroma(pianoroll)
return np.count_nonzero(np.any(chroma, 0)) | def n_pitche_classes_used(pianoroll) | Return the number of unique pitch classes used in a pianoroll. | 3.500182 | 3.410502 | 1.026295 |
_validate_pianoroll(pianoroll)
if np.issubdtype(pianoroll.dtype, np.bool_):
pianoroll = pianoroll.astype(np.uint8)
padded = np.pad(pianoroll, ((1, 1), (0, 0)), 'constant')
diff = np.diff(padded, axis=0).reshape(-1)
onsets = (diff > 0).nonzero()[0]
offsets = (diff < 0).nonzero()[0]
... | def qualified_note_rate(pianoroll, threshold=2) | Return the ratio of the number of the qualified notes (notes longer than
`threshold` (in time step)) to the total number of notes in a pianoroll. | 2.257025 | 2.245966 | 1.004924 |
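Padding a silent step at each end guarantees every note produces exactly one positive and one negative jump in the time difference, so onset and offset indices pair up in the flattened array and their distance encodes duration in multiples of the pitch-axis width. A self-contained sketch with toy notes (the int16 cast is added here to avoid unsigned wrap-around in `np.diff`):

```python
import numpy as np

threshold = 2
pianoroll = np.zeros((8, 128), dtype=np.uint8)
pianoroll[0:1, 60] = 100  # 1-step note: not qualified
pianoroll[2:6, 64] = 100  # 4-step note: qualified

# Pad one silent step at both ends so every note has an onset and offset.
padded = np.pad(pianoroll.astype(np.int16), ((1, 1), (0, 0)), 'constant')
diff = np.diff(padded, axis=0).reshape(-1)
onsets = (diff > 0).nonzero()[0]
offsets = (diff < 0).nonzero()[0]

# Flattened index distance // 128 gives the duration in time steps.
durations = (offsets - onsets) // pianoroll.shape[1]
rate = np.count_nonzero(durations > threshold) / len(onsets)
```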
_validate_pianoroll(pianoroll)
n_poly = np.count_nonzero(np.count_nonzero(pianoroll, 1) > threshold)
return n_poly / len(pianoroll) | def polyphonic_rate(pianoroll, threshold=2) | Return the ratio of the number of time steps where the number of pitches
being played is larger than `threshold` to the total number of time steps
in a pianoroll. | 2.696493 | 2.875043 | 0.937897 |
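Counting active pitches per time step and thresholding gives the polyphony indicator directly. A toy sketch:

```python
import numpy as np

threshold = 2
pianoroll = np.zeros((4, 128), dtype=bool)
pianoroll[0, [60, 64, 67]] = True  # 3 simultaneous pitches: polyphonic
pianoroll[1, 60] = True            # 1 pitch: not polyphonic

n_pitches = np.count_nonzero(pianoroll, axis=1)  # pitches per time step
n_poly = np.count_nonzero(n_pitches > threshold)
rate = n_poly / len(pianoroll)
```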
if beat_resolution not in (4, 6, 8, 9, 12, 16, 18, 24):
raise ValueError("Unsupported beat resolution. Only 4, 6, 8, 9, 12, "
                 "16, 18, 24 are supported.")
_validate_pianoroll(pianoroll)
def _drum_pattern_mask(res, tol):
if res == 24:
drum_p... | def drum_in_pattern_rate(pianoroll, beat_resolution, tolerance=0.1) | Return the ratio of the number of drum notes that lie on the drum
pattern (i.e., at certain time steps) to the total number of drum notes. | 1.908838 | 1.893038 | 1.008347 |
if not isinstance(key, int):
raise TypeError("`key` must be an integer.")
if key > 11 or key < 0:
raise ValueError("`key` must be an integer between 0 and 11.")
if kind not in ('major', 'minor'):
raise ValueError("`kind` must be one of 'major' or 'minor'.")
_validate_pian... | def in_scale_rate(pianoroll, key=3, kind='major') | Return the ratio of the number of nonzero entries that lie in a specific
scale to the total number of nonzero entries in a pianoroll. Default to C
major scale. | 2.141032 | 2.103438 | 1.017873 |
_validate_pianoroll(pianoroll_1)
_validate_pianoroll(pianoroll_2)
assert len(pianoroll_1) == len(pianoroll_2), (
"Input pianorolls must have the same length.")
def _tonal_matrix(r1, r2, r3):
tonal_matrix = np.empty((6, 12))
tonal_matrix[0] = r1 * np.sin(np.arange(1... | def tonal_distance(pianoroll_1, pianoroll_2, beat_resolution, r1=1.0, r2=1.0,
r3=0.5) | Return the tonal distance [1] between the two input pianorolls.
[1] Christopher Harte, Mark Sandler, and Martin Gasser. Detecting
harmonic change in musical audio. In Proc. ACM Workshop on Audio and
Music Computing Multimedia, 2006. | 1.6112 | 1.627604 | 0.989921 |
if not self.is_binarized():
self.pianoroll[self.pianoroll.nonzero()] = value
return
if dtype is None:
if isinstance(value, int):
dtype = int
elif isinstance(value, float):
dtype = float
nonzero = self.pianor... | def assign_constant(self, value, dtype=None) | Assign a constant value to all nonzeros in the pianoroll. If the
pianoroll is not binarized, its data type will be preserved. If the
pianoroll is binarized, it will be cast to the type of `value`.
Arguments
---------
value : int or float
The constant value to be as... | 2.642281 | 2.187109 | 1.208116 |
if not self.is_binarized():
self.pianoroll = (self.pianoroll > threshold) | def binarize(self, threshold=0) | Binarize the pianoroll.
Parameters
----------
threshold : int or float
A threshold used to binarize the pianorolls. Defaults to zero. | 5.594303 | 5.025837 | 1.113109 |
# pianoroll
if not isinstance(self.pianoroll, np.ndarray):
raise TypeError("`pianoroll` must be a numpy array.")
if not (np.issubdtype(self.pianoroll.dtype, np.bool_)
or np.issubdtype(self.pianoroll.dtype, np.number)):
raise TypeError("The data ty... | def check_validity(self) | Raise error if any invalid attribute found. | 1.781616 | 1.74328 | 1.021991 |
self.pianoroll = self.pianoroll.clip(lower, upper) | def clip(self, lower=0, upper=127) | Clip the pianoroll by the given lower and upper bounds.
Parameters
----------
lower : int or float
The lower bound to clip the pianoroll. Defaults to 0.
upper : int or float
The upper bound to clip the pianoroll. Defaults to 127. | 4.324906 | 3.536804 | 1.222829 |
nonzero_steps = np.any(self.pianoroll, axis=1)
inv_last_nonzero_step = np.argmax(np.flip(nonzero_steps, axis=0))
active_length = self.pianoroll.shape[0] - inv_last_nonzero_step
return active_length | def get_active_length(self) | Return the active length (i.e., without trailing silence) of the
pianoroll. The unit is time step.
Returns
-------
active_length : int
The active length (i.e., without trailing silence) of the pianoroll. | 3.980884 | 3.492857 | 1.139721 |
if not np.any(self.pianoroll):
    raise ValueError("Cannot compute the active pitch range for an "
                     "empty pianoroll")
lowest = 0
highest = 127
while lowest < highest:
if np.any(self.pianoroll[:, lowest]):
break
... | def get_active_pitch_range(self) | Return the active pitch range as a tuple (lowest, highest).
Returns
-------
lowest : int
The lowest active pitch in the pianoroll.
highest : int
The highest active pitch in the pianoroll. | 2.558522 | 2.321702 | 1.102003 |
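The per-pitch scan can also be expressed with one reduction: collapse the time axis with `any`, then take the first and last nonzero pitch indices. A sketch of that equivalent formulation with toy notes:

```python
import numpy as np

pianoroll = np.zeros((4, 128), dtype=bool)
pianoroll[0, 60] = True
pianoroll[2, 72] = True

active = np.any(pianoroll, axis=0)  # which pitches ever sound
pitches = active.nonzero()[0]
lowest, highest = int(pitches[0]), int(pitches[-1])
```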
is_binarized = np.issubdtype(self.pianoroll.dtype, np.bool_)
return is_binarized | def is_binarized(self) | Return True if the pianoroll is already binarized. Otherwise, return
False.
Returns
-------
is_binarized : bool
True if the pianoroll is already binarized; otherwise, False. | 4.700046 | 3.541803 | 1.327021 |
self.pianoroll = np.pad(
self.pianoroll, ((0, pad_length), (0, 0)), 'constant') | def pad(self, pad_length) | Pad the pianoroll with zeros at the end along the time axis.
Parameters
----------
pad_length : int
The length to pad with zeros along the time axis. | 3.21232 | 2.930999 | 1.095982 |
remainder = self.pianoroll.shape[0] % factor
if remainder:
pad_width = ((0, (factor - remainder)), (0, 0))
self.pianoroll = np.pad(self.pianoroll, pad_width, 'constant') | def pad_to_multiple(self, factor) | Pad the pianoroll with zeros at the end along the time axis with the
minimum length that makes the resulting pianoroll length a multiple of
`factor`.
Parameters
----------
factor : int
The value which the length of the resulting pianoroll will be
a multip... | 2.626986 | 2.893454 | 0.907907 |
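Padding to a multiple only needs the remainder of the length modulo `factor`. A standalone sketch of the same computation on a bare numpy array:

```python
import numpy as np

factor = 8
pianoroll = np.zeros((13, 128), dtype=np.uint8)

remainder = pianoroll.shape[0] % factor  # 13 % 8 == 5
if remainder:
    # Pad (factor - remainder) zero steps at the end of the time axis.
    pad_width = ((0, factor - remainder), (0, 0))
    pianoroll = np.pad(pianoroll, pad_width, 'constant')
```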