url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 48
51
| id
int64 600M
3.43B
| node_id
stringlengths 18
24
| number
int64 2
7.78k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | comments
listlengths 0
30
| created_at
stringdate 2020-04-14 18:18:51
2025-09-18 08:25:34
| updated_at
stringdate 2020-04-29 09:23:05
2025-09-22 08:47:53
| closed_at
stringlengths 20
20
⌀ | author_association
stringclasses 4
values | type
null | active_lock_reason
null | draft
bool 0
classes | pull_request
dict | body
stringlengths 0
228k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
null | state_reason
stringclasses 4
values | sub_issues_summary
dict | issue_dependencies_summary
dict | is_pull_request
bool 1
class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6118
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6118/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6118/events
|
https://github.com/huggingface/datasets/issues/6118
| 1,835,940,417
|
I_kwDODunzps5tbjpB
| 6,118
|
IterableDataset.from_generator() fails with pickle error when provided a generator or iterator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4",
"events_url": "https://api.github.com/users/finkga/events{/privacy}",
"followers_url": "https://api.github.com/users/finkga/followers",
"following_url": "https://api.github.com/users/finkga/following{/other_user}",
"gists_url": "https://api.github.com/users/finkga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/finkga",
"id": 1281051,
"login": "finkga",
"node_id": "MDQ6VXNlcjEyODEwNTE=",
"organizations_url": "https://api.github.com/users/finkga/orgs",
"received_events_url": "https://api.github.com/users/finkga/received_events",
"repos_url": "https://api.github.com/users/finkga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finkga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/finkga",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! `IterableDataset.from_generator` expects a generator function, not the object (to be consistent with `Dataset.from_generator`).\r\n\r\nYou can fix the above snippet as follows:\r\n```python\r\ntrain_dataset = IterableDataset.from_generator(line_generator, fn_kwargs={\"files\": model_training_files})\r\n```",
"to anyone reaching this issue, the argument is `gen_kwargs`:\r\n```py\r\ntrain_dataset = IterableDataset.from_generator(line_generator, gen_kwargs={\"files\": model_training_files})\r\n```",
"This still fails, for both Dataset and IterableDataset\r\n\r\n```python\r\n records = [1, 2, 3]\r\n\r\n gen = ({\"row\": str(x)} for x in records)\r\n\r\n dataset = IterableDataset.from_generator(generator=gen)\r\n ```\r\n\r\nEdit: gen_kwargs must be picklable, it can't be an iterator even if you are not doing multiprocessing, the same goes for included namespace variables."
] |
2023-08-04T01:45:04Z
|
2024-12-18T18:30:57Z
| null |
NONE
| null | null | null | null |
### Describe the bug
**Description**
Calling `IterableDataset.from_generator()` with a generator object as the `generator` argument fails with `TypeError: cannot pickle 'generator' object`.
**Code example**
```
def line_generator(files: List[Path]):
if isinstance(files, str):
files = [Path(files)]
for file in files:
if isinstance(file, str):
file = Path(file)
yield from open(file,'r').readlines()
...
model_training_files = ['file1.txt', 'file2.txt', 'file3.txt']
train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files))
```
**Traceback**
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields
yield
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps
dump(obj, file)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump
Pickler(file, recurse=True).dump(obj)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump
self.save(obj)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems
save(v)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'generator' object
### Steps to reproduce the bug
1. Create a set of text files to iterate over.
2. Create a generator that yields the lines of each file until all files are exhausted.
3. Instantiate the dataset from the generator with `IterableDataset.from_generator()`.
4. Wait for the explosion.
### Expected behavior
Since the function claims to accept a generator, I would expect no crash. Instead, I would expect the dataset to return all the lines in the files as queued up by the `line_generator()` function.
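For reference, a minimal sketch of the call pattern described in the maintainers' comments above, passing the generator function itself plus `gen_kwargs` (reusing the `line_generator` and file list from the snippet above), would be:
```python
from datasets import IterableDataset

# Sketch based on the comments above: pass the generator *function*
# (not a generator object) and supply its arguments via `gen_kwargs`,
# which must itself be picklable.
model_training_files = ["file1.txt", "file2.txt", "file3.txt"]
train_dataset = IterableDataset.from_generator(
    line_generator,                              # the function, not line_generator(...)
    gen_kwargs={"files": model_training_files},
)
```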
### Environment info
datasets.__version__ == '2.13.1'
Python 3.9.6
Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6118/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6116
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6116/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6116/events
|
https://github.com/huggingface/datasets/issues/6116
| 1,835,098,484
|
I_kwDODunzps5tYWF0
| 6,116
|
[Docs] The "Process" how-to guide lacks description of `select_columns` function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unifyh",
"id": 18213435,
"login": "unifyh",
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"repos_url": "https://api.github.com/users/unifyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unifyh",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Great idea, feel free to open a PR! :)"
] |
2023-08-03T13:45:10Z
|
2023-08-16T10:02:53Z
|
2023-08-16T10:02:53Z
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.
### Motivation
This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468 #5474). However, it has not been included in the guide since its implementation by PR #5480.
Mentioning it in the guide would help future users discover this added feature.
### Your contribution
I could submit a PR to add a brief description of the function to said guide.
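For context, a minimal sketch of `select_columns` usage that such a description could build on (toy data, not taken from the guide):
```python
from datasets import Dataset

# Toy dataset; keep only the "text" and "label" columns.
ds = Dataset.from_dict({
    "id": [0, 1],
    "text": ["hello", "world"],
    "label": [0, 1],
})
ds = ds.select_columns(["text", "label"])
print(ds.column_names)  # ['text', 'label']
```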
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6116/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6114
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6114/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6114/events
|
https://github.com/huggingface/datasets/issues/6114
| 1,834,015,584
|
I_kwDODunzps5tUNtg
| 6,114
|
Cache not being used when loading commonvoice 8.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4",
"events_url": "https://api.github.com/users/clabornd/events{/privacy}",
"followers_url": "https://api.github.com/users/clabornd/followers",
"following_url": "https://api.github.com/users/clabornd/following{/other_user}",
"gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clabornd",
"id": 31082141,
"login": "clabornd",
"node_id": "MDQ6VXNlcjMxMDgyMTQx",
"organizations_url": "https://api.github.com/users/clabornd/orgs",
"received_events_url": "https://api.github.com/users/clabornd/received_events",
"repos_url": "https://api.github.com/users/clabornd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clabornd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clabornd",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"You can avoid this by using the `revision` parameter in `load_dataset` to always force downloading a specific commit (if not specified it defaults to HEAD, hence the redownload).",
"Thanks @mariosasko this works well, looks like I should have read the documentation a bit more carefully. \r\n\r\nIt is still a bit confusing which hash I should provide: passing `revision = c8fd66e85f086e3abb11eeee55b1737a3d1e8487` from https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/commits/main caused the cached version at `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a` to be loaded, so I had to know that it was the previous commit unless I've missed something else."
] |
2023-08-02T23:18:11Z
|
2023-08-18T23:59:00Z
|
2023-08-18T23:59:00Z
|
NONE
| null | null | null | null |
### Describe the bug
I have commonvoice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files etc, and was used as the cached version last time I touched the ec2 instance I'm working on. Now, with the same command that downloaded it initially:
```
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")
```
it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32`
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
2. dataset is updated by maintainers
3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
### Expected behavior
I expect that it uses the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`.
I'm not sure what happens in step 2, but if, say, the dataset referenced by "mozilla-foundation/common_voice_8_0" was modified by the maintainers, how would I force datasets to point to the original version I downloaded?
EDIT: It was indeed the case that the maintainers had updated the dataset (v 8.0.0). However, I still can't load the dataset from disk instead of redownloading, with for example:
```
load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en")
> ...
> File ~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938, in cast_array_to_feature(array, feature, allow_number_to_str)
1937 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
...
1794 e = e.__context__
-> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
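For reference, the comment above suggests pinning the dataset to a specific commit via the `revision` argument of `load_dataset`; a minimal sketch (the commit hash is a placeholder) would be:
```python
from datasets import load_dataset

# Sketch: pin to the dataset commit that matches the locally cached copy,
# so later updates on the Hub do not trigger a re-download.
dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="<commit-sha-of-the-downloaded-version>",  # placeholder
    use_auth_token="<mytoken>",
)
```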
### Environment info
datasets==2.7.0
python==3.10.8
OS: AWS Linux
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4",
"events_url": "https://api.github.com/users/clabornd/events{/privacy}",
"followers_url": "https://api.github.com/users/clabornd/followers",
"following_url": "https://api.github.com/users/clabornd/following{/other_user}",
"gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clabornd",
"id": 31082141,
"login": "clabornd",
"node_id": "MDQ6VXNlcjMxMDgyMTQx",
"organizations_url": "https://api.github.com/users/clabornd/orgs",
"received_events_url": "https://api.github.com/users/clabornd/received_events",
"repos_url": "https://api.github.com/users/clabornd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clabornd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clabornd",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6114/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6113
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6113/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6113/events
|
https://github.com/huggingface/datasets/issues/6113
| 1,833,854,030
|
I_kwDODunzps5tTmRO
| 6,113
|
load_dataset() fails with streamlit caching inside docker
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4",
"events_url": "https://api.github.com/users/fierval/events{/privacy}",
"followers_url": "https://api.github.com/users/fierval/followers",
"following_url": "https://api.github.com/users/fierval/following{/other_user}",
"gists_url": "https://api.github.com/users/fierval/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fierval",
"id": 987574,
"login": "fierval",
"node_id": "MDQ6VXNlcjk4NzU3NA==",
"organizations_url": "https://api.github.com/users/fierval/orgs",
"received_events_url": "https://api.github.com/users/fierval/received_events",
"repos_url": "https://api.github.com/users/fierval/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fierval/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fierval",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! This should be fixed in the latest (patch) release (run `pip install -U datasets` to install it). This behavior was due to a bug in our authentication logic."
] |
2023-08-02T20:20:26Z
|
2023-08-21T18:18:27Z
|
2023-08-21T18:18:27Z
|
NONE
| null | null | null | null |
### Describe the bug
When calling `load_dataset` in a Streamlit application running within a Docker container, I get a failure with the error message:
EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files
Traceback:
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 62, in <module>
dashboard()
File "/home/user/app/app.py", line 47, in dashboard
feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper
return cached_func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "/home/user/app/hf_interface.py", line 16, in load_data
hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
### Steps to reproduce the bug
```python
@st.cache_resource
def load_data(repo_id: str, hf_token=None):
"""Load data from HuggingFace Hub
"""
hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"])
return hf_dataset
```
### Expected behavior
Expect to load.
Note: works fine with datasets==2.13.1
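For reference, the comment above says the authentication bug is fixed in the latest patch release; a minimal sketch of the loader after upgrading (`pip install -U datasets`), assuming a release where `token=` replaces the deprecated `use_auth_token=`, might look like:
```python
import json

import streamlit as st
from datasets import load_dataset


@st.cache_resource
def load_data(repo_id: str, hf_token=None):
    """Load data from the Hugging Face Hub."""
    # Same call as above; `token=` is assumed to be available after the upgrade.
    hf_dataset = load_dataset(repo_id, token=hf_token)
    hf_dataset = hf_dataset.map(
        lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"]
    )
    return hf_dataset
```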
### Environment info
datasets==2.14.2,
Ubuntu bionic-based Docker container.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6113/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6112
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6112/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6112/events
|
https://github.com/huggingface/datasets/issues/6112
| 1,833,693,299
|
I_kwDODunzps5tS_Bz
| 6,112
|
yaml error using push_to_hub with generated README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4",
"events_url": "https://api.github.com/users/kevintee/events{/privacy}",
"followers_url": "https://api.github.com/users/kevintee/followers",
"following_url": "https://api.github.com/users/kevintee/following{/other_user}",
"gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kevintee",
"id": 1643887,
"login": "kevintee",
"node_id": "MDQ6VXNlcjE2NDM4ODc=",
"organizations_url": "https://api.github.com/users/kevintee/orgs",
"received_events_url": "https://api.github.com/users/kevintee/received_events",
"repos_url": "https://api.github.com/users/kevintee/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevintee/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kevintee",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting! This is a bug in converting the `ArrayXD` types to YAML. It will be fixed soon."
] |
2023-08-02T18:21:21Z
|
2023-12-12T15:00:44Z
|
2023-12-12T15:00:44Z
|
NONE
| null | null | null | null |
### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"tokens": Sequence(Value(dtype="string")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
}
)
```
and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error:
```
Traceback (most recent call last):
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
response.raise_for_status()
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module>
build_dataset()
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset
push_to_hub(dataset, "multitask_document_classification_dataset")
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub
dataset.push_to_hub(f"looppayments/{dataset_name}", private=True)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file
commit_info = self.create_commit(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e)
Bad request for commit endpoint:
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9)
7 | - 3
8 | - 224
9 | - 224
10 | dtype: float64
--------------^
11 | - name: input_ids
12 | sequence: int64
```
My guess is that the auto-generated YAML cannot be parsed for some reason.
### Steps to reproduce the bug
The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet:
```
from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value
from PIL import Image
from transformers import AutoProcessor
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"tokens": Sequence(Value(dtype="string")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
}
)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
def preprocess_dataset(rows):
# Get images
images = [
Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"]
]
encoding = processor(
images,
rows["tokens"],
boxes=rows["bbox"],
truncation=True,
padding="max_length",
)
encoding["tokens"] = rows["tokens"]
return encoding
dataset = dataset.map(
preprocess_dataset,
batched=True,
batch_size=5,
features=features,
)
```
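The snippet above stops at the `map` call; per the traceback, the failure occurs during the subsequent upload. A minimal sketch of that final step, continuing the snippet above (the repository name is a placeholder):
```python
# Sketch: the reported error is raised while uploading the auto-generated
# README.md during this call, after the parquet shards are pushed.
dataset.push_to_hub("your-org/multitask_document_classification_dataset", private=True)
```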
### Expected behavior
Using datasets==2.11.0, I'm able to successfully `push_to_hub` with no issues, but with datasets==2.14.2, I run into the above error.
### Environment info
- `datasets` version: 2.14.2
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6112/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6111
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6111/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6111/events
|
https://github.com/huggingface/datasets/issues/6111
| 1,832,781,654
|
I_kwDODunzps5tPgdW
| 6,111
|
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4",
"events_url": "https://api.github.com/users/2catycm/events{/privacy}",
"followers_url": "https://api.github.com/users/2catycm/followers",
"following_url": "https://api.github.com/users/2catycm/following{/other_user}",
"gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/2catycm",
"id": 41530341,
"login": "2catycm",
"node_id": "MDQ6VXNlcjQxNTMwMzQx",
"organizations_url": "https://api.github.com/users/2catycm/orgs",
"received_events_url": "https://api.github.com/users/2catycm/received_events",
"repos_url": "https://api.github.com/users/2catycm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2catycm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/2catycm",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"any idea?",
"This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n\r\n`load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`",
"> This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n> \r\n> `load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`\r\n\r\nThanks for your help. This works."
] |
2023-08-02T09:17:29Z
|
2023-08-29T02:00:28Z
|
2023-08-29T02:00:28Z
|
NONE
| null | null | null | null |
### Describe the bug
For researchers in some countries or regions, downloading with `load_dataset` is often impossible due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object.
However, even once the local files are on disk, loading them into dataset objects is still error-prone.
### Steps to reproduce the bug
Steps to reproduce the bug:
1. Found CIFAR dataset in hugging face: https://huggingface.co/datasets/cifar100/tree/main
2. Click ":" button to show "Clone repository" option, and then follow the prompts on the box:
```bash
cd my_directory_absolute
git lfs install
git clone https://huggingface.co/datasets/cifar100
ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK.
```
3. Write A python file to try to load the dataset
```python
from datasets import load_dataset, load_from_disk
dataset = load_from_disk("my_directory_absolute/cifar100")
```
Notice that, according to issue #3700, it is wrong to use `load_dataset("my_directory_absolute/cifar100")`, so we must use `load_from_disk` instead.
4. Then you will see the error reported:
```log
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[5], line 9
1 from datasets import load_dataset, load_from_disk
----> 9 dataset = load_from_disk("my_directory_absolute/cifar100")
File ~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232, in load_from_disk(dataset_path, fs, keep_in_memory, storage_options)
2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
2231 else:
-> 2232 raise FileNotFoundError(
2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory."
2234 )
FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory.
```
### Expected behavior
The dataset should be loaded successfully.
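For reference, a minimal sketch of the distinction the maintainers draw in the comments above: a cloned Hub repository is loaded with `load_dataset`, while `load_from_disk` only accepts directories produced by `save_to_disk`.
```python
from datasets import load_dataset, load_from_disk

# A git-cloned Hub repository is loaded with load_dataset(local_path).
dataset = load_dataset("my_directory_absolute/cifar100")

# load_from_disk is only for directories written by save_to_disk.
dataset.save_to_disk("my_directory_absolute/cifar100_arrow")
reloaded = load_from_disk("my_directory_absolute/cifar100_arrow")
```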
### Environment info
```bash
datasets-cli env
```
-> results:
```txt
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.14.2
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4",
"events_url": "https://api.github.com/users/2catycm/events{/privacy}",
"followers_url": "https://api.github.com/users/2catycm/followers",
"following_url": "https://api.github.com/users/2catycm/following{/other_user}",
"gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/2catycm",
"id": 41530341,
"login": "2catycm",
"node_id": "MDQ6VXNlcjQxNTMwMzQx",
"organizations_url": "https://api.github.com/users/2catycm/orgs",
"received_events_url": "https://api.github.com/users/2catycm/received_events",
"repos_url": "https://api.github.com/users/2catycm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2catycm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/2catycm",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6111/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6110
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6110/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6110/events
|
https://github.com/huggingface/datasets/issues/6110
| 1,831,110,633
|
I_kwDODunzps5tJIfp
| 6,110
|
[BUG] Dataset initialized from in-memory data does not create cache.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MattYoon",
"id": 57797966,
"login": "MattYoon",
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MattYoon",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached."
] |
2023-08-01T11:58:58Z
|
2023-08-17T14:03:01Z
|
2023-08-17T14:03:00Z
|
NONE
| null | null | null | null |
### Describe the bug
A `Dataset` initialized from in-memory data (a dictionary in my case; I haven't tested other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# below code was run the second time so the map function can be loaded from cache if exists
from datasets import load_dataset, Dataset
dataset = load_dataset("tatsu-lab/alpaca")['train']
dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map
print(len(dataset.cache_files))
# 1
# copy the exact same data but initialize from a dictionary
memory_dataset = Dataset.from_dict({
'instruction': dataset['instruction'],
'input': dataset['input'],
'output': dataset['output'],
'text': dataset['text']})
memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map
print(len(memory_dataset.cache_files))
# Map: 100%|██████████| 52002/52002
# 0
```
### Expected behavior
The `map` function should create a cache regardless of how the `Dataset` was created.
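For reference, the maintainer comment above states that caching an in-memory dataset's `map` result requires an explicit `cache_file_name`; a minimal sketch continuing the snippet above (the path is a placeholder):
```python
# Sketch: write the map result to an explicit Arrow cache file so it can be
# reloaded on later runs, since in-memory datasets have no automatic cache dir.
memory_dataset = memory_dataset.map(
    lambda x: {"input": x["input"] + "hi"},
    cache_file_name="/tmp/alpaca_map_cache.arrow",  # placeholder path
)
```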
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6110/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6109
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6109/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6109/events
|
https://github.com/huggingface/datasets/issues/6109
| 1,830,753,793
|
I_kwDODunzps5tHxYB
| 6,109
|
Problems in downloading Amazon reviews from HF
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4",
"events_url": "https://api.github.com/users/610v4nn1/events{/privacy}",
"followers_url": "https://api.github.com/users/610v4nn1/followers",
"following_url": "https://api.github.com/users/610v4nn1/following{/other_user}",
"gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/610v4nn1",
"id": 52964960,
"login": "610v4nn1",
"node_id": "MDQ6VXNlcjUyOTY0OTYw",
"organizations_url": "https://api.github.com/users/610v4nn1/orgs",
"received_events_url": "https://api.github.com/users/610v4nn1/received_events",
"repos_url": "https://api.github.com/users/610v4nn1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/610v4nn1",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\nSee: https://huggingface.co/datasets/amazon_reviews_multi/discussions/4#64c3898db63057f1fd3ce1a0 ",
"The dataset can be accessed from https://www.kaggle.com/datasets/mexwell/amazon-reviews-multi.",
"For those willing to transform the csv files from Kaggle into Huggingface datasets for their NLP course (exercise on summarisation), you can use this code on Google Collab:\r\n\r\n`from datasets import load_dataset\r\n\r\nimport pandas as pd\r\nfrom datasets import Dataset, DatasetDict\r\n\r\n# Load your CSV previously downloaded files from Kaggle on Google Collab\r\ntrain_csv_path = \"/content/train.csv\"\r\nvalidation_csv_path = \"/content/validation.csv\"\r\ntest_csv_path = '/content/test.csv'\r\n\r\n# Read CSV files into pandas DataFrames\r\ntrain_df = pd.read_csv(train_csv_path, engine='python')\r\nvalidation_df = pd.read_csv(validation_csv_path, engine='python')\r\ntest_df = pd.read_csv(test_csv_path, engine='python')\r\n\r\n# Filter by language ('es' for Spanish and 'en' for English)\r\nspanish_train_df = train_df[train_df['language'] == 'es']\r\nspanish_validation_df = validation_df[validation_df['language'] == 'es']\r\nspanish_test_df = test_df[test_df['language'] == 'es']\r\n\r\nenglish_train_df = train_df[train_df['language'] == 'en']\r\nenglish_validation_df = validation_df[validation_df['language'] == 'en']\r\nenglish_test_df = test_df[test_df['language'] == 'en']\r\n\r\n# Create Hugging Face datasets\r\nspanish_dataset = DatasetDict({\r\n 'train': Dataset.from_pandas(spanish_train_df),\r\n 'validation': Dataset.from_pandas(spanish_validation_df),\r\n 'test': Dataset.from_pandas(spanish_test_df)\r\n})\r\n\r\nenglish_dataset = DatasetDict({\r\n 'train': Dataset.from_pandas(english_train_df),\r\n 'validation': Dataset.from_pandas(english_validation_df),\r\n 'test': Dataset.from_pandas(english_test_df)\r\n})\r\nenglish_dataset = english_dataset.remove_columns(['Unnamed: 0', '__index_level_0__'])\r\nspanish_dataset = spanish_dataset.remove_columns(['Unnamed: 0', '__index_level_0__'])`"
] |
2023-08-01T08:38:29Z
|
2025-07-18T17:47:30Z
|
2023-08-02T07:12:07Z
|
NONE
| null | null | null | null |
### Describe the bug
I have a script downloading `amazon_reviews_multi`.
When the download starts, I get
```
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.43MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 928kB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.81MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s]
Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s]
```
The file is clearly too small to contain the requested dataset; in fact, it contains an error message:
```
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error>
```
obviously the script fails:
```
> raise DatasetGenerationError("An error occurred while generating the dataset") from e
E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE")
### Expected behavior
I would expect the dataset to be downloaded and processed
### Environment info
* The problem is present with both datasets 2.12.0 and 2.14.2
* python version 3.10.12
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6109/timeline
| null |
not_planned
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6108
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6108/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6108/events
|
https://github.com/huggingface/datasets/issues/6108
| 1,830,347,187
|
I_kwDODunzps5tGOGz
| 6,108
|
Loading local datasets got strangely stuck
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4",
"events_url": "https://api.github.com/users/LoveCatc/events{/privacy}",
"followers_url": "https://api.github.com/users/LoveCatc/followers",
"following_url": "https://api.github.com/users/LoveCatc/following{/other_user}",
"gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LoveCatc",
"id": 48412571,
"login": "LoveCatc",
"node_id": "MDQ6VXNlcjQ4NDEyNTcx",
"organizations_url": "https://api.github.com/users/LoveCatc/orgs",
"received_events_url": "https://api.github.com/users/LoveCatc/received_events",
"repos_url": "https://api.github.com/users/LoveCatc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LoveCatc",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of proceeding too slow.",
"I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G.",
"We use a generic multiprocessing code, so there is little we can do about this - unfortunately, turning off multiprocessing seems to be the only solution. Multithreading would make our code easier to maintain and (most likely) avoid issues such as this one, but we cannot use it until the GIL is dropped (no-GIL Python should be released in 2024, so we can start exploring this then)",
"The problem seems to be the `Generating train split`. Is it possible to avoid that? I have a dataset saved, just want to load it but somehow running into issues with that again.",
"Hey guys, recently I ran into this problem again and I spent one whole day trying to locate the problem. Finally I found the problem seems to be with `pyarrow`'s json parser, and it seems a long-existing problem. Similar issue can be found in #2181. Anyway, my solution is to adjust the `load_dataset`'s parameter `chunksize`. You can inspect the parameter set in `datasets/packaged_modules/json/json.py`, now the actual chunksize should be very small, and you can increase the value. For me, `chunksize=10<<23` could solve the stuck problem. But I also find that too big `chunksize`, like `10 << 30`, would also cause a stuck, which is rather weird. I think I may explore this when I am free. And hope this can help those who also encounter the same problem. ",
"Experiencing the same issue with the `kaist-ai/Feedback-Collection` dataset, which is comparatively small i.e. 100k rows.\r\nCode to reproduce\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"kaist-ai/Feedback-Collection\")\r\n```\r\n\r\nI have tried setting `num_proc=1` as well as `chunksize=1024, 64` but problem persists. Any pointers?",
"sorry to disturb, at datasets==2.21.0, I add `chunksize` parameter but got error \"doesn't have a 'chunksize' key\". Is it got removed?"
] |
2023-08-01T02:28:06Z
|
2024-12-31T16:01:00Z
| null |
NONE
| null | null | null | null |
### Describe the bug
I am trying to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (it is a dataset for an NLP model). The code snippet is:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train']
```
However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I noticed a really strange behavior. If I load the dataset in this way:
```python
dlist = list()
for _ in LIST_OF_FILE_PATHS:
dlist.append(load_dataset("json", data_files=_)['train'])
ds = concatenate_datasets(dlist)
```
I can actually load all the files successfully this way, despite the slow speed. But if I load them all at once as in the first snippet, things go wrong. I did try Control-C to trace where it gets stuck, but the program cannot be terminated this way when `num_proc` is set to `None`; the only thing I can do is suspend it with Control-Z and then kill it. If I use more than 2 CPUs, a Control-C simply causes the following error:
```bash
^C
Process ForkPoolWorker-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get
res = self._reader.recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Generating train split: 92431 examples [01:23, 1104.25 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module>
a = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split
for job_id, done, content in iflatmap_unordered(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get
raise TimeoutError
multiprocess.context.TimeoutError
```
I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON lines contain very long text (more than 1e7 characters). I do not know whether this could be the problem. There should not be any bottleneck in system resources: the whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB of RAM.
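As one of the comments on this issue later suggests, increasing the JSON builder's `chunksize` can avoid the hang; a minimal sketch of that workaround (the value `10 << 23` is the one reported there, not something verified here):
```python
from datasets import load_dataset

# chunksize is forwarded to the JSON builder config; too large a value reportedly
# re-introduces the hang, so treat this as a tuning knob rather than a fix
ds = load_dataset(
    "json",
    data_files=LIST_OF_FILE_PATHS,
    num_proc=16,
    chunksize=10 << 23,
)["train"]
```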
Thanks for your efforts and patience! Any suggestion or help would be appreciated.
### Steps to reproduce the bug
1. Use `load_dataset()` with `data_files=LIST_OF_FILES`
### Expected behavior
All the files should be smoothly loaded.
### Environment info
- Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each json structure only contains one key: `text`. Format checked.
- `datasets` version: 2.14.2
- Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609
- Pandas version: 1.5.2
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6108/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6106
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6106/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6106/events
|
https://github.com/huggingface/datasets/issues/6106
| 1,829,131,223
|
I_kwDODunzps5tBlPX
| 6,106
|
load local json_file as dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CiaoHe",
"id": 39040787,
"login": "CiaoHe",
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CiaoHe",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! We use PyArrow to read JSON files, and PyArrow doesn't allow different value types in the same column. #5776 should address this.\r\n\r\nIn the meantime, you can combine `Dataset.from_generator` with the above code to cast the values to the same type. ",
"Thanks for your help!"
] |
2023-07-31T12:53:49Z
|
2023-08-18T01:46:35Z
|
2023-08-18T01:46:35Z
|
NONE
| null | null | null | null |
### Describe the bug
I tried to load a local JSON file as a dataset but failed to parse the JSON file because some columns are of 'float' type.
### Steps to reproduce the bug
1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)`
2. Then an error like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` is triggered
### Expected behavior
It should allow some columns to be of 'float' type, or at least convert those columns to str type.
I tried to avoid the error by naively converting the float items to str:
```python
# if col type is not str, we need to convert it to str
mapping = {}
for col in keys:
if isinstance(dataset[0][col], str):
mapping[col] = [row.get(col) for row in dataset]
else:
mapping[col] = [str(row.get(col)) for row in dataset]
```
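A minimal sketch of how this casting can be combined with `Dataset.from_dict` (the comments mention `Dataset.from_generator`; `from_dict` is used here for brevity). The file name and the assumption that the file is JSON Lines are mine:
```python
import json
from datasets import Dataset

# assumption: each line of the file is one JSON record (JSON Lines);
# adapt the reading step if the file is a single JSON array instead
with open("my_dataset.jsonl") as f:
    records = [json.loads(line) for line in f]

keys = records[0].keys()
# cast any non-str value to str so every column has a single, consistent type
mapping = {
    col: [v if isinstance(v, str) else str(v) for v in (row.get(col) for row in records)]
    for col in keys
}

ds = Dataset.from_dict(mapping)
```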
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CiaoHe",
"id": 39040787,
"login": "CiaoHe",
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CiaoHe",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6106/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6104
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6104/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6104/events
|
https://github.com/huggingface/datasets/issues/6104
| 1,828,959,107
|
I_kwDODunzps5tA7OD
| 6,104
|
HF Datasets data access is extremely slow even when in memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462"
] |
2023-07-31T11:12:19Z
|
2023-08-01T11:22:43Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long?
It's faster to produce the dataset from scratch than to access it from HF Datasets!
### Steps to reproduce the bug
I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).
```python
#!/usr/bin/env python3
import sys
import time
import torch
from datasets import load_dataset
def main(dataset_name):
# Start the timer
start_time = time.time()
# Load the dataset from Hugging Face Hub
dataset = load_dataset(dataset_name)
# Set the dataset format as torch
dataset.set_format(type="torch")
# Perform an identity map
dataset = dataset.map(lambda example: example, batched=True, batch_size=20)
# End the timer
end_time = time.time()
# Print the time taken
print(f"Time taken: {end_time - start_time:.2f} seconds")
if __name__ == "__main__":
dataset_name = "NightMachinery/hf_datasets_bug1"
print(f"dataset_name: {dataset_name}")
main(dataset_name)
```
### Expected behavior
_
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6104/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6100
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6100/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6100/events
|
https://github.com/huggingface/datasets/issues/6100
| 1,828,118,930
|
I_kwDODunzps5s9uGS
| 6,100
|
TypeError when loading from GCP bucket
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ",
"We have fixed it. We are planning to do a patch release today."
] |
2023-07-30T23:03:00Z
|
2023-08-03T10:00:48Z
|
2023-08-01T10:38:55Z
|
NONE
| null | null | null | null |
### Describe the bug
Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1.
### Steps to reproduce the bug
Load any file from a GCP bucket:
```python
import datasets
datasets.load_dataset("json", data_files=["gs://..."])
```
The following exception is raised:
```python
Traceback (most recent call last):
...
packages/datasets/data_files.py", line 335, in resolve_pattern
protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""
TypeError: can only concatenate tuple (not "str") to tuple
```
With a `GoogleFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string.
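A minimal sketch of the kind of guard that avoids the error, normalizing the protocol before concatenation (illustrative only; the actual patch in `datasets` may differ):
```python
import fsspec


def protocol_prefix(fs: fsspec.AbstractFileSystem) -> str:
    # fs.protocol is a str (e.g. "file") for local filesystems, but can be a
    # tuple such as ("gs", "gcs") for Google Cloud Storage
    protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
    return protocol + "://" if protocol != "file" else ""
```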
### Expected behavior
The file should be loaded without exception.
### Environment info
- `datasets` version: 2.14.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6100/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6099
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6099/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6099/events
|
https://github.com/huggingface/datasets/issues/6099
| 1,827,893,576
|
I_kwDODunzps5s83FI
| 6,099
|
How do i get "amazon_us_reviews
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4",
"events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}",
"followers_url": "https://api.github.com/users/IqraBaluch/followers",
"following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}",
"gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IqraBaluch",
"id": 57810189,
"login": "IqraBaluch",
"node_id": "MDQ6VXNlcjU3ODEwMTg5",
"organizations_url": "https://api.github.com/users/IqraBaluch/orgs",
"received_events_url": "https://api.github.com/users/IqraBaluch/received_events",
"repos_url": "https://api.github.com/users/IqraBaluch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IqraBaluch",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\r\ntry:\r\n example = self.info.features.encode_example(record) if self.info.features is not None else record\r\nexcept Exception as e:\r\n print(record)\r\n```\r\n\r\nโฌ๏ธ\r\n\r\n```\r\n{'<?xml version=\"1.0\" encoding=\"UTF-8\"?>': '<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>N2HFJ82ZV8SZW9BV</RequestId><HostId>Zw2DQ0V2GdRmvH5qWEpumK4uj5+W8YPcilQbN9fLBr3VqQOcKPHOhUZLG3LcM9X5fkOetxp48Os=</HostId></Error>'}\r\n```",
"I'm getting same errors when loading this dataset",
"I have figured it out. there was an option of **parquet formated files** i downloaded some from there. ",
"this dataset is unfortunately no longer public",
"Thanks for reporting, @IqraBaluch.\r\n\r\nWe contacted the authors and unfortunately they reported that Amazon has decided to stop distributing this dataset.",
"If anyone still needs this dataset, you could find it on kaggle here : https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset",
"Thanks @Maryam-Mostafa ",
"@albertvillanova don't tell 'em, we have figured it out. XD",
"I noticed that some book data is missing, we can only get Books_v1_02 data. \r\nIs there any way we can get the Books_v1_00 and Books_v1_01? \r\nReally appreciate !!!",
"@albertvillanova will this dataset be retired given the data are no longer hosted on S3? What is done in cases such as these?"
] |
2023-07-30T11:02:17Z
|
2023-08-21T05:08:08Z
|
2023-08-10T05:02:35Z
|
NONE
| null | null | null | null |
### Feature request
I have been trying to load 'amazon_us_reviews' but have been unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02']
Example of usage:
`load_dataset('amazon_us_reviews', 'Wireless_v1_00')`]
__________________________________________________________________________
`amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')
print(amazon_us_reviews)`
**ERROR**
`Generating` train split: 0%
0/960872 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1692 )
-> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record
1694 writer.write(example, key)
11 frames
KeyError: 'marketplace'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1711 e = e.__context__
-> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1713
1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
### Motivation
The dataset I'm using
https://huggingface.co/datasets/amazon_us_reviews
### Your contribution
What is the best way to load this data?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4",
"events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}",
"followers_url": "https://api.github.com/users/IqraBaluch/followers",
"following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}",
"gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IqraBaluch",
"id": 57810189,
"login": "IqraBaluch",
"node_id": "MDQ6VXNlcjU3ODEwMTg5",
"organizations_url": "https://api.github.com/users/IqraBaluch/orgs",
"received_events_url": "https://api.github.com/users/IqraBaluch/received_events",
"repos_url": "https://api.github.com/users/IqraBaluch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IqraBaluch",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6099/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6097
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6097/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6097/events
|
https://github.com/huggingface/datasets/issues/6097
| 1,827,054,143
|
I_kwDODunzps5s5qI_
| 6,097
|
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4",
"events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}",
"followers_url": "https://api.github.com/users/aschoenauer-sebag/followers",
"following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}",
"gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aschoenauer-sebag",
"id": 2538048,
"login": "aschoenauer-sebag",
"node_id": "MDQ6VXNlcjI1MzgwNDg=",
"organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs",
"received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events",
"repos_url": "https://api.github.com/users/aschoenauer-sebag/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aschoenauer-sebag",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it."
] |
2023-07-28T20:31:59Z
|
2023-07-28T20:49:58Z
|
2023-07-28T20:49:58Z
|
NONE
| null | null | null | null |
### Describe the bug
Hi team!
I have observed what seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the `get_nearest_examples` method of the `Dataset` class fails to retrieve anything but the embeddings themselves - not super useful. This is not the case when `set_format` is not used: then you can also retrieve any other feature value, such as an index/id/etc.
Are you able to reproduce what I observe?
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
For the most similar vectors to `new_vector`, this will return only the following - in particular, it will not return the `ids` feature:
```
{'vectors': array([[random values ...]])}
```
### Expected behavior
The expected behavior happens when the `set_format` method is not called:
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
# foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This *will* return the `ids` of the similar vectors - though, unfortunately, as a list of lists in lieu of an array (for caching reasons, I believe - I read that elsewhere):
```
{'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']}
```
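As the author notes in the comments, passing `output_all_columns=True` keeps the numpy format for `vectors` while still returning the other features; a minimal sketch:
```python
from datasets import Dataset
import numpy as np

foo = {'vectors': np.random.random((100, 1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
# keep the numpy format for 'vectors' but still return every other column
foo.set_format('numpy', ['vectors'], output_all_columns=True)
foo.add_faiss_index('vectors')

new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
# res now contains both 'vectors' and 'ids'
```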
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4",
"events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}",
"followers_url": "https://api.github.com/users/aschoenauer-sebag/followers",
"following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}",
"gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aschoenauer-sebag",
"id": 2538048,
"login": "aschoenauer-sebag",
"node_id": "MDQ6VXNlcjI1MzgwNDg=",
"organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs",
"received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events",
"repos_url": "https://api.github.com/users/aschoenauer-sebag/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aschoenauer-sebag",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6097/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6090
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6090/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6090/events
|
https://github.com/huggingface/datasets/issues/6090
| 1,825,865,043
|
I_kwDODunzps5s1H1T
| 6,090
|
FilesIterable skips all the files after a hidden file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10785413?v=4",
"events_url": "https://api.github.com/users/dkrivosic/events{/privacy}",
"followers_url": "https://api.github.com/users/dkrivosic/followers",
"following_url": "https://api.github.com/users/dkrivosic/following{/other_user}",
"gists_url": "https://api.github.com/users/dkrivosic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dkrivosic",
"id": 10785413,
"login": "dkrivosic",
"node_id": "MDQ6VXNlcjEwNzg1NDEz",
"organizations_url": "https://api.github.com/users/dkrivosic/orgs",
"received_events_url": "https://api.github.com/users/dkrivosic/received_events",
"repos_url": "https://api.github.com/users/dkrivosic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dkrivosic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkrivosic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dkrivosic",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting. We've merged a PR with a fix."
] |
2023-07-28T07:25:57Z
|
2023-07-28T10:51:14Z
|
2023-07-28T10:50:11Z
|
NONE
| null | null | null | null |
### Describe the bug
When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`.
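In other words, the intended behaviour is to skip the hidden file and keep iterating, along these lines (an illustrative sketch, not the actual `datasets` source):
```python
import os
from typing import Iterable, Iterator


def iter_visible_files(paths: Iterable[str]) -> Iterator[str]:
    for path in paths:
        if os.path.basename(path).startswith((".", "__")):
            continue  # skip the hidden file; `return` here would drop everything after it
        yield path


print(list(iter_visible_files(["a.txt", ".hidden", "b.txt"])))  # ['a.txt', 'b.txt']
```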
### Steps to reproduce the bug
https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8-
### Expected behavior
The script should print all the files except the hidden one.
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6090/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6089
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6089/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6089/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6089/events
|
https://github.com/huggingface/datasets/issues/6089
| 1,825,761,476
|
I_kwDODunzps5s0ujE
| 6,089
|
AssertionError: daemonic processes are not allowed to have children
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).",
"> We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).\r\n\r\nGreat! Download takes more time than extract, multiple threads can download in parallel, which can speed up a lot."
] |
2023-07-28T06:04:00Z
|
2023-07-31T02:34:02Z
| null |
NONE
| null | null | null | null |
### Describe the bug
When I call `load_dataset` with `num_proc > 0` in a daemon process, I get the following error:
```python
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract
return self.extract(self.download(url_or_urls))
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download
downloaded_path_or_paths = map_nested(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested
mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map
return _map_with_multiprocessing_pool(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool
with Pool(num_proc, initargs=initargs, initializer=initializer) as pool:
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__
self._repopulate_pool()
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start
assert not _current_process._config.get('daemon'), ^^^^^^^^^^^^^^^^^
AssertionError: daemonic processes are not allowed to have children
```
Downloading is IO-intensive work; maybe `datasets` could replace the multiprocessing pool with a multithreading pool when running inside a daemon process.
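A minimal sketch of the idea (illustrative only, not the actual `datasets` internals; the helper below is hypothetical):
```python
import tempfile
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def download_one(url: str) -> str:
    # hypothetical helper: fetch `url` and return the local path it was saved to
    with urllib.request.urlopen(url) as resp, tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(resp.read())
        return f.name


def download_all(urls, num_workers: int = 16) -> list:
    # threads can be created inside a daemon process, unlike a multiprocessing Pool
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return list(pool.map(download_one, urls))
```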
### Steps to reproduce the bug
1. Start a daemon process
2. Run `load_dataset` with `num_proc > 0`
### Expected behavior
No error.
### Environment info
Python 3.11.4
datasets latest master
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6089/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6089/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6088
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6088/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6088/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6088/events
|
https://github.com/huggingface/datasets/issues/6088
| 1,825,665,235
|
I_kwDODunzps5s0XDT
| 6,088
|
Loading local data files initiates web requests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4",
"events_url": "https://api.github.com/users/lytning98/events{/privacy}",
"followers_url": "https://api.github.com/users/lytning98/followers",
"following_url": "https://api.github.com/users/lytning98/following{/other_user}",
"gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lytning98",
"id": 23375707,
"login": "lytning98",
"node_id": "MDQ6VXNlcjIzMzc1NzA3",
"organizations_url": "https://api.github.com/users/lytning98/orgs",
"received_events_url": "https://api.github.com/users/lytning98/received_events",
"repos_url": "https://api.github.com/users/lytning98/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytning98/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lytning98",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] |
2023-07-28T04:06:26Z
|
2023-07-28T05:02:22Z
|
2023-07-28T05:02:22Z
|
NONE
| null | null | null | null |
As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```
But this failed due to a web request, because I'm executing the script on a machine without Internet access. The stacktrace shows:
```
in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode)
940 self.download_config = download_config
941 self.download_mode = download_mode
--> 942 increase_load_count(name, resource_type="dataset")
```
I've read in the source code that this can be fixed by setting an environment variable to run in offline mode. I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of calling `increase_load_count` on some server when loading just LOCAL data files?
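For reference, the offline switch mentioned above looks roughly like this (assuming the `HF_DATASETS_OFFLINE` environment variable; it has to be set before `datasets` is imported):
```python
import os

# must be set before importing `datasets`, since the flag is read at import time
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("json", data_files="path/to/local/my_dataset.json")
```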
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4",
"events_url": "https://api.github.com/users/lytning98/events{/privacy}",
"followers_url": "https://api.github.com/users/lytning98/followers",
"following_url": "https://api.github.com/users/lytning98/following{/other_user}",
"gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lytning98",
"id": 23375707,
"login": "lytning98",
"node_id": "MDQ6VXNlcjIzMzc1NzA3",
"organizations_url": "https://api.github.com/users/lytning98/orgs",
"received_events_url": "https://api.github.com/users/lytning98/received_events",
"repos_url": "https://api.github.com/users/lytning98/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytning98/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lytning98",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6088/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6088/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6087
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6087/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6087/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6087/events
|
https://github.com/huggingface/datasets/issues/6087
| 1,825,133,741
|
I_kwDODunzps5syVSt
| 6,087
|
fsspec dependency is set too low
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1085885?v=4",
"events_url": "https://api.github.com/users/iXce/events{/privacy}",
"followers_url": "https://api.github.com/users/iXce/followers",
"following_url": "https://api.github.com/users/iXce/following{/other_user}",
"gists_url": "https://api.github.com/users/iXce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iXce",
"id": 1085885,
"login": "iXce",
"node_id": "MDQ6VXNlcjEwODU4ODU=",
"organizations_url": "https://api.github.com/users/iXce/orgs",
"received_events_url": "https://api.github.com/users/iXce/received_events",
"repos_url": "https://api.github.com/users/iXce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iXce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iXce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iXce",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting! A PR with a fix has just been merged."
] |
2023-07-27T20:08:22Z
|
2023-07-28T10:07:56Z
|
2023-07-28T10:07:03Z
|
NONE
| null | null | null | null |
### Describe the bug
fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0, commit where it was added: https://github.com/fsspec/filesystem_spec/commit/9577c8a482eb0a69092913b81580942a68d66a76#diff-906155c7e926a9ff58b9f23369bb513b09b445f5b0f41fa2a84015d0b471c68cR180), however the dependency is set to 2021.11.1 https://github.com/huggingface/datasets/blob/main/setup.py#L129
### Steps to reproduce the bug
1. Install fsspec==2021.11.1
2. Install latest datasets==2.14.1
3. Import datasets, import fails due to lack of `fsspec.callbacks.TqdmCallback`
### Expected behavior
No import issue
### Environment info
N/A
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6087/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6087/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6086
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6086/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6086/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6086/events
|
https://github.com/huggingface/datasets/issues/6086
| 1,825,009,268
|
I_kwDODunzps5sx250
| 6,086
|
Support `fsspec` in `Dataset.to_<format>` methods
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
] | null |
[
"Hi @mariosasko unless someone's already working on it, I guess I can tackle it!",
"Hi! Sure, feel free to tackle this.",
"#self-assign",
"I'm assuming this should just cover `to_csv`, `to_parquet`, and `to_json`, right? As `to_list` and `to_dict` just return Python objects, `to_pandas` returns a `pandas.DataFrame` and `to_sql` just inserts into a SQL DB, is that right?",
"Fixed by #6096. "
] |
2023-07-27T19:08:37Z
|
2024-03-07T07:22:43Z
|
2024-03-07T07:22:42Z
|
COLLABORATOR
| null | null | null | null |
Supporting this should be fairly easy.
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353).
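For illustration, the kind of call this would enable (a sketch only; the bucket path is made up and the `storage_options` parameter name is an assumption here):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# write directly to S3 through fsspec instead of a local path
ds.to_parquet(
    "s3://my-bucket/imdb/train.parquet",
    storage_options={"key": "<aws key>", "secret": "<aws secret>"},
)
```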
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6086/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6086/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6084
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6084/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6084/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6084/events
|
https://github.com/huggingface/datasets/issues/6084
| 1,824,896,761
|
I_kwDODunzps5sxbb5
| 6,084
|
Changing pixel values of images in the Winoground dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/90359895?v=4",
"events_url": "https://api.github.com/users/ZitengWangNYU/events{/privacy}",
"followers_url": "https://api.github.com/users/ZitengWangNYU/followers",
"following_url": "https://api.github.com/users/ZitengWangNYU/following{/other_user}",
"gists_url": "https://api.github.com/users/ZitengWangNYU/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZitengWangNYU",
"id": 90359895,
"login": "ZitengWangNYU",
"node_id": "MDQ6VXNlcjkwMzU5ODk1",
"organizations_url": "https://api.github.com/users/ZitengWangNYU/orgs",
"received_events_url": "https://api.github.com/users/ZitengWangNYU/received_events",
"repos_url": "https://api.github.com/users/ZitengWangNYU/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZitengWangNYU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZitengWangNYU/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZitengWangNYU",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2023-07-27T17:55:35Z
|
2023-07-27T17:55:35Z
| null |
NONE
| null | null | null | null |
Hi, as I followed the instructions with the latest "datasets" version:
```
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of the images are slightly different.
I thought it was due to a package version difference, but this morning I found that my Winoground dataset in Colab had become the same as the one in my HPC environment. The dataset in Colab used to produce the correct result, but now that is gone as well.
Can you help me with this? What causes the datasets to have the wrong pixel values?
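In case it helps narrow this down, here is a small sketch for comparing what each environment actually decodes; the `test` split and `image_0` column names are assumptions about the Winoground schema.
```python
# Sketch: hash the decoded pixel values and print the library versions in each environment.
import hashlib
import numpy as np
import datasets, PIL

print(datasets.__version__, PIL.__version__)
example = examples["test"][0]
pixels = np.asarray(example["image_0"])
print(pixels.shape, hashlib.sha256(pixels.tobytes()).hexdigest())
```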
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6084/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6084/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6079
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6079/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6079/events
|
https://github.com/huggingface/datasets/issues/6079
| 1,822,597,471
|
I_kwDODunzps5soqFf
| 6,079
|
Iterating over DataLoader based on HF datasets is stuck forever
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4",
"events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}",
"followers_url": "https://api.github.com/users/arindamsarkar93/followers",
"following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}",
"gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arindamsarkar93",
"id": 5454868,
"login": "arindamsarkar93",
"node_id": "MDQ6VXNlcjU0NTQ4Njg=",
"organizations_url": "https://api.github.com/users/arindamsarkar93/orgs",
"received_events_url": "https://api.github.com/users/arindamsarkar93/received_events",
"repos_url": "https://api.github.com/users/arindamsarkar93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arindamsarkar93",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"When the process starts to hang, can you interrupt it with CTRL + C and paste the error stack trace here? ",
"Thanks @mariosasko for your prompt response, here's the stack trace:\r\n\r\n```\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[12], line 4\r\n 2 t = time.time()\r\n 3 iter_ = 0\r\n----> 4 for batch in train_dataloader:\r\n 5 #batch_proc = streaming_obj.collect_streaming_data_batch(batch)\r\n 6 iter_ += 1\r\n 8 if iter_ == 1:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:634, in _BaseDataLoaderIter.__next__(self)\r\n 631 if self._sampler_iter is None:\r\n 632 # TODO(https://github.com/pytorch/pytorch/issues/76750)\r\n 633 self._reset() # type: ignore[call-arg]\r\n--> 634 data = self._next_data()\r\n 635 self._num_yielded += 1\r\n 636 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n 637 self._IterableDataset_len_called is not None and \\\r\n 638 self._num_yielded > self._IterableDataset_len_called:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:678, in _SingleProcessDataLoaderIter._next_data(self)\r\n 676 def _next_data(self):\r\n 677 index = self._next_index() # may raise StopIteration\r\n--> 678 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 679 if self._pin_memory:\r\n 680 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)\r\n 30 for _ in possibly_batched_index:\r\n 31 try:\r\n---> 32 data.append(next(self.dataset_iter))\r\n 33 except StopIteration:\r\n 34 self.ended = True\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)\r\n 1350 yield formatter.format_row(pa_table)\r\n 1351 return\r\n-> 1353 for key, example in ex_iterable:\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:956, in BufferShuffledExamplesIterable.__iter__(self)\r\n 954 # this is the shuffle buffer that we keep in memory\r\n 955 mem_buffer = []\r\n--> 956 for x in self.ex_iterable:\r\n 957 if len(mem_buffer) == buffer_size: # if the buffer is full, pick and example from it\r\n 958 i = next(indices_iterator)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:296, in ShuffledDataSourcesArrowExamplesIterable.__iter__(self)\r\n 294 for key, pa_table in self.generate_tables_fn(**kwargs_with_shuffled_shards):\r\n 295 for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER):\r\n--> 296 formatted_batch = formatter.format_batch(pa_subtable)\r\n 297 for example in _batch_to_examples(formatted_batch):\r\n 298 yield key, example\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/formatting.py:448, in PythonFormatter.format_batch(self, pa_table)\r\n 446 if self.lazy:\r\n 447 return LazyBatch(pa_table, self)\r\n--> 448 batch = self.python_arrow_extractor().extract_batch(pa_table)\r\n 449 batch = self.python_features_decoder.decode_batch(batch)\r\n 450 return batch\r\n\r\nFile 
~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/formatting.py:150, in PythonArrowExtractor.extract_batch(self, pa_table)\r\n 149 def extract_batch(self, pa_table: pa.Table) -> dict:\r\n--> 150 return pa_table.to_pydict()\r\n\r\nKeyboardInterrupt: \r\n```\r\n",
"Update: If i let it run, it eventually fails with:\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\nCell In[16], line 4\r\n 2 t = time.time()\r\n 3 iter_ = 0\r\n----> 4 for batch in train_dataloader:\r\n 5 #batch_proc = streaming_obj.collect_streaming_data_batch(batch)\r\n 6 iter_ += 1\r\n 8 if iter_ == 1:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:634, in _BaseDataLoaderIter.__next__(self)\r\n 631 if self._sampler_iter is None:\r\n 632 # TODO(https://github.com/pytorch/pytorch/issues/76750)\r\n 633 self._reset() # type: ignore[call-arg]\r\n--> 634 data = self._next_data()\r\n 635 self._num_yielded += 1\r\n 636 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n 637 self._IterableDataset_len_called is not None and \\\r\n 638 self._num_yielded > self._IterableDataset_len_called:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:678, in _SingleProcessDataLoaderIter._next_data(self)\r\n 676 def _next_data(self):\r\n 677 index = self._next_index() # may raise StopIteration\r\n--> 678 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 679 if self._pin_memory:\r\n 680 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)\r\n 30 for _ in possibly_batched_index:\r\n 31 try:\r\n---> 32 data.append(next(self.dataset_iter))\r\n 33 except StopIteration:\r\n 34 self.ended = True\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:1360, in IterableDataset.__iter__(self)\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n-> 1360 yield format_dict(example) if format_dict else example\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:85, in TorchFormatter.recursive_tensorize(self, data_struct)\r\n 84 def recursive_tensorize(self, data_struct: dict):\r\n---> 85 return map_nested(self._recursive_tensorize, data_struct, map_list=False)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:463, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 461 num_proc = 1\r\n 462 if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 463 mapped = [\r\n 464 _single_map_nested((function, obj, types, None, True, None))\r\n 465 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 466 ]\r\n 467 else:\r\n 468 mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:464, in <listcomp>(.0)\r\n 461 num_proc = 1\r\n 462 if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 463 mapped = [\r\n--> 464 _single_map_nested((function, obj, types, None, True, None))\r\n 465 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 466 ]\r\n 467 else:\r\n 468 mapped = parallel_map(function, 
iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:366, in _single_map_nested(args)\r\n 364 # Singleton first to spare some computation\r\n 365 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 366 return function(data_struct)\r\n 368 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 369 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:82, in TorchFormatter._recursive_tensorize(self, data_struct)\r\n 80 elif isinstance(data_struct, (list, tuple)):\r\n 81 return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\r\n---> 82 return self._tensorize(data_struct)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:68, in TorchFormatter._tensorize(self, value)\r\n 66 if isinstance(value, PIL.Image.Image):\r\n 67 value = np.asarray(value)\r\n---> 68 return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})\r\n\r\nRuntimeError: Could not infer dtype of decimal.Decimal\r\n```",
"PyTorch tensors cannot store `Decimal` objects. Casting the column with decimals to `float` should fix the issue.",
"I already have cast in collate_fn, in which I perform .astype(float) for each numerical field.\r\nOn the same instance, I installed a conda env with python 3.6, and this works well.\r\n\r\nSample:\r\n\r\n```\r\ndef streaming_data_collate_fn(batch):\r\n df = pd.DataFrame.from_dict(batch)\r\n feat_vals = torch.FloatTensor(np.nan_to_num(np.array(df[feats].astype(float))))\r\n\r\n```",
"`collate_fn` is applied after the `torch` formatting step, so I think the only option when working with an `IterableDataset` is to remove the `with_format` call and perform the conversion from Python values to PyTorch tensors in `collate_fn`. The standard `Dataset` supports `with_format(\"numpy\")`, which should make this conversion faster.",
"Thanks! \r\nPython 3.10 conda-env: After replacing with_format(\"torch\") with with_format(\"numpy\"), the error went away. However, it was still taking over 2 minutes to load a very small batch of 64 samples with num_workers set to 32. Once I removed with_format call altogether, it is finishing in 11 seconds.\r\n\r\nPython 3.6 based conda-env: When I switch the kernel , neither of the above work, and with_format(\"torch\") is the only thing that works, and executes in 1.6 seconds.\r\n\r\nI feel something else is also amiss here.",
"Can you share the `datasets` and `torch` versions installed in these conda envs?\r\n\r\n> Once I removed with_format call altogether, it is finishing in 11 seconds.\r\n\r\nHmm, that's surprising. What are your dataset's `.features`?",
"Python 3.6: \r\ndatasets.__version__ 2.4.0\r\ntorch.__version__ 1.10.1+cu102\r\n\r\nPython 3.10:\r\ndatasets.__version__ 2.14.0\r\ntorch.__version__ 2.0.0\r\n\r\nAnonymized features are of the form (subset shown here):\r\n{\r\n'string_feature_i': Value(dtype='string', id=None),\r\n'numerical_feature_i': Value(dtype='decimal128(38, 0)', id=None),\r\n'numerical_feature_series_i': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None),\r\n}\r\n\r\n\r\nThere is no output from .features in python 3.6 kernel BTW.",
"One more thing, in python 3.10 based kernel, interestingly increasing num_workers seem to be increasing the runtime of iterating I was trying out. In python 3.10 kernel execution, I do not even see multiple CPU cores spiking unlike in 3.6.\r\n\r\n512 batch size on 32 workers executes in 2.4 seconds on python 3.6 kernel, while it takes ~118 seconds on 3.10!",
"**Update**: It seems the latency part is more of a multiprocessing issue with torch and some host specific issue, and I had to scourge through relevant pytorch issues, when I stumbled across these threads:\r\n1. https://github.com/pytorch/pytorch/issues/102494\r\n2. https://github.com/pytorch/pytorch/issues/102269\r\n3. https://github.com/pytorch/pytorch/issues/99625\r\n\r\nOut of the suggested solutions, the one that worked in my case was:\r\n```\r\nos.environ['KMP_AFFINITY'] = \"disabled\"\r\n```\r\nIt is working for now, though I have no clue why, just I hope it does not get stuck when I do actual model training, will update by tomorrow.\r\n\r\n\r\n",
"I'm facing a similar situation in the local VS Code. \r\n\r\nDatasets version 2.14.4\r\nTorch 2.0.1+cu118\r\n\r\nSame code runs without issues in Colab\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"Supermaxman/esa-hubble\", streaming=True)\r\nsample = next(iter(dataset[\"train\"]))\r\n```\r\n\r\nis stuck for minutes. If I interrupt, I get\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[5], line 5\r\n 1 from datasets import load_dataset\r\n 3 dataset = load_dataset(\"Supermaxman/esa-hubble\", streaming=True)\r\n----> 5 sample = next(iter(dataset[\"train\"]))\r\n 6 print(sample[\"text\"])\r\n 7 sample[\"image\"]\r\n\r\nFile [~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353](https://file+.vscode-resource.vscode-cdn.net/home/osanseviero/Desktop/workspace/genai/nbs/~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353), in IterableDataset.__iter__(self)\r\n 1350 yield formatter.format_row(pa_table)\r\n 1351 return\r\n-> 1353 for key, example in ex_iterable:\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n\r\nFile [~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:255](https://file+.vscode-resource.vscode-cdn.net/home/osanseviero/Desktop/workspace/genai/nbs/~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:255), in ArrowExamplesIterable.__iter__(self)\r\n 253 def __iter__(self):\r\n 254 formatter = PythonFormatter()\r\n--> 255 for key, pa_table in self.generate_tables_fn(**self.kwargs):\r\n 256 for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER):\r\n...\r\n-> 1130 return self._sslobj.read(len, buffer)\r\n 1131 else:\r\n 1132 return self._sslobj.read(len)\r\n```",
"@osanseviero I assume the `self._sslobj.read(len, buffer)` line comes from the built-in `ssl` module, so this probably has something to do with your network. Please open a new issue with the full stack trace in case you haven't resolved this yet.",
"Thank you reporting this and sharing the solution, I ran into this as well!",
"Ran into same issue after upgrading to pytorch-2.0. Disabling KMP_AFFINITY as mentioned above worked for me. Thanks!\r\n"
] |
2023-07-26T14:52:37Z
|
2024-02-07T17:46:52Z
|
2023-07-30T14:09:06Z
|
NONE
| null | null | null | null |
### Describe the bug
I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10-based conda environment.
I have a dataset in Parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6-based conda environment. What should my next steps be here?
### Steps to reproduce the bug
```
train_dataset = load_dataset(
    "parquet", data_files = {'train': tr_data_path + '*.parquet'},
    split = 'train',
    streaming = True
).with_format('torch')
train_dataloader = DataLoader(train_dataset, batch_size = 2, num_workers = 0, collate_fn = streaming_data_collate_fn)  # collate_fn belongs to the DataLoader, not load_dataset
t = time.time()
iter_ = 0
for batch in train_dataloader:
iter_ += 1
if iter_ == 1000:
break
print (time.time() - t)
```
### Expected behavior
The snippet should work normally and load the next batch of data.
### Environment info
datasets: '2.14.0'
pyarrow: '12.0.0'
torch: '2.0.0'
Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
!uname -r
5.10.178-162.673.amzn2.x86_64
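For reference, a minimal sketch of the workaround that came out of the comments: drop `.with_format('torch')`, cast the Decimal columns to float inside the collate function, and disable KMP_AFFINITY to avoid the hang. The column names and file path below are placeholders.
```python
# Sketch only: placeholder columns/paths; KMP_AFFINITY disabled per the linked PyTorch issues.
import os
os.environ["KMP_AFFINITY"] = "disabled"

import numpy as np
import pandas as pd
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

feats = ["numerical_feature_1", "numerical_feature_2"]  # placeholder feature columns

def streaming_data_collate_fn(batch):
    df = pd.DataFrame(batch)  # `batch` is a list of example dicts
    return torch.FloatTensor(np.nan_to_num(df[feats].astype(float).to_numpy()))

train_dataset = load_dataset(
    "parquet", data_files={"train": "tr_data/*.parquet"}, split="train", streaming=True
)  # note: no .with_format("torch")
train_dataloader = DataLoader(train_dataset, batch_size=2, collate_fn=streaming_data_collate_fn)
```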
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4",
"events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}",
"followers_url": "https://api.github.com/users/arindamsarkar93/followers",
"following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}",
"gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arindamsarkar93",
"id": 5454868,
"login": "arindamsarkar93",
"node_id": "MDQ6VXNlcjU0NTQ4Njg=",
"organizations_url": "https://api.github.com/users/arindamsarkar93/orgs",
"received_events_url": "https://api.github.com/users/arindamsarkar93/received_events",
"repos_url": "https://api.github.com/users/arindamsarkar93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arindamsarkar93",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6079/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6078
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6078/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6078/events
|
https://github.com/huggingface/datasets/issues/6078
| 1,822,501,472
|
I_kwDODunzps5soSpg
| 6,078
|
resume_download with streaming=True
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4",
"events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}",
"followers_url": "https://api.github.com/users/NicolasMICAUX/followers",
"following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}",
"gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NicolasMICAUX",
"id": 72763959,
"login": "NicolasMICAUX",
"node_id": "MDQ6VXNlcjcyNzYzOTU5",
"organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs",
"received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events",
"repos_url": "https://api.github.com/users/NicolasMICAUX/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NicolasMICAUX",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ",
"Ok thank you for your answer",
"I'm closing this as a duplicate of #5380"
] |
2023-07-26T14:08:22Z
|
2023-07-28T11:05:03Z
|
2023-07-28T11:05:03Z
|
NONE
| null | null | null | null |
### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step my training stopped at.
But how can I resume the download from step 1_000_000 without re-streaming the first 1 million docs of the dataset?
`download_config=DownloadConfig(resume_download=True)` does not seem to work with `streaming=True`.
### Steps to reproduce the bug
```
from datasets import load_dataset, DownloadConfig
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True, # optional
split="train",
download_config=DownloadConfig(resume_download=True)
)
# interrupt the run and try to relaunch it => this restarts from scratch
```
### Expected behavior
I would expect a parameter to start streaming from a given index in the dataset.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0
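Until more efficient resumption is supported, an approximate workaround is to skip the examples already consumed. A sketch (note that `IterableDataset.skip` still iterates over the skipped examples, so they are re-streamed rather than truly resumed):
```python
# Sketch: skip the first 1_000_000 examples; they are still read, just not yielded.
from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)
resumed = dataset.skip(1_000_000)
```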
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6077
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6077/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6077/events
|
https://github.com/huggingface/datasets/issues/6077
| 1,822,486,810
|
I_kwDODunzps5soPEa
| 6,077
|
Mapping gets stuck at 99%
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The `MAX_MAP_BATCH_SIZE = 1_000_000_000` hack is bad as it loads the entire dataset into RAM when performing `.map`. Instead, it's best to use `.iter(batch_size)` to iterate over the data batches and compute `mean` for each column. (`stddev` can be computed in another pass).\r\n\r\nAlso, these arrays are big, so it makes sense to reduce `batch_size`/`writer_batch_size` to avoid RAM issues and slow IO.",
"Hi @mariosasko !\r\n\r\nI agree, it's an ugly hack, but it was convenient since the resulting `mean_std` could be cached by the library. For my large dataset (which doesn't fit in RAM), I'm actually using something similar to what you suggested. I got rid of the first mapping in the above scripts and replaced it with an iterator, but the issue with the second mapping still persists.",
"Have you tried to reduce `batch_size`/`writer_batch_size` in the 2nd `.map`? Also, can you interrupt the process when it gets stuck and share the error stack trace?",
"I think `batch_size/writer_batch_size` is already at its lowest in the 2nd `.map` since `batched=False` implies `batch_size=1` and `len(ds) = 1000 = writer_batch_size`.\r\n\r\nHere is also a bunch of stack traces when I interrupted the process:\r\n\r\n<details>\r\n <summary>stack trace 1</summary>\r\n\r\n```python\r\n(pyg)[d623204@rosetta-bigviz01 stage-laurent-f]$ python src/random_scripts/uses_random_data.py \r\nFound cached dataset random_data (/local_scratch/lfainsin/.cache/huggingface/datasets/random_data/default/0.0.0/444e214e1d0e6298cfd3f2368323ec37073dc1439f618e19395b1f421c69b066)\r\nApplying mean/std: 97%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 967/1000 [00:01<00:00, 534.87 examples/s]Traceback (most recent call last): \r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3449, in _map_single\r\n writer.write(example)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 490, in write\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 263, in _cast_to_python_objects\r\n def _cast_to_python_objects(obj: Any, only_1d_for_numpy: bool, optimize_list_casting: bool) -> Tuple[Any, bool]:\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs_new/data/users/lfainsin/stage-laurent-f/src/random_scripts/uses_random_data.py\", line 62, in <module>\r\n ds_normalized = ds.map(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3492, in _map_single\r\n writer.finalize()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 584, in finalize\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in <listcomp>\r\n [\r\nKeyboardInterrupt\r\n```\r\n\r\n</details>\r\n\r\n<details>\r\n <summary>stack trace 2</summary>\r\n\r\n```python\r\n(pyg)[d623204@rosetta-bigviz01 stage-laurent-f]$ python src/random_scripts/uses_random_data.py \r\nFound cached dataset random_data (/local_scratch/lfainsin/.cache/huggingface/datasets/random_data/default/0.0.0/444e214e1d0e6298cfd3f2368323ec37073dc1439f618e19395b1f421c69b066)\r\nApplying mean/std: 99%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 988/1000 [00:20<00:00, 526.19 examples/s]Applying mean/std: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 999/1000 [00:21<00:00, 9.66 examples/s]Traceback (most recent call last): \r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3449, in _map_single\r\n writer.write(example)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 490, in write\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 263, in _cast_to_python_objects\r\n def _cast_to_python_objects(obj: Any, only_1d_for_numpy: bool, optimize_list_casting: bool) -> Tuple[Any, bool]:\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs_new/data/users/lfainsin/stage-laurent-f/src/random_scripts/uses_random_data.py\", line 62, in <module>\r\n ds_normalized = ds.map(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3492, in _map_single\r\n writer.finalize()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 584, in finalize\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 291, in _cast_to_python_objects\r\n if config.JAX_AVAILABLE and \"jax\" in sys.modules:\r\nKeyboardInterrupt\r\n```\r\n\r\n</details>\r\n\r\n<details>\r\n <summary>stack trace 3</summary>\r\n\r\n```python\r\n(pyg)[d623204@rosetta-bigviz01 stage-laurent-f]$ python src/random_scripts/uses_random_data.py \r\nFound cached dataset random_data (/local_scratch/lfainsin/.cache/huggingface/datasets/random_data/default/0.0.0/444e214e1d0e6298cfd3f2368323ec37073dc1439f618e19395b1f421c69b066)\r\nApplying mean/std: 99%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 989/1000 [00:01<00:00, 504.80 examples/s]Traceback (most recent call last): \r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File 
\"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3449, in _map_single\r\n writer.write(example)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 490, in write\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to 
list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs_new/data/users/lfainsin/stage-laurent-f/src/random_scripts/uses_random_data.py\", line 62, in <module>\r\n ds_normalized = ds.map(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3492, in _map_single\r\n writer.finalize()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 584, in finalize\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 298, in _cast_to_python_objects\r\n if obj.ndim == 0:\r\nKeyboardInterrupt\r\n```\r\n\r\n</details>\r\n",
"Same issue by following code:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torchvision.transforms import transforms\r\n\r\npath = \"~/dataset/diffusiondb50k\" # path maybe not necessary\r\ndataset = load_dataset(\"poloclub/diffusiondb\", \"2m_first_1k\", data_dir=path)\r\n\r\ntransform = transforms.Compose([transforms.ToTensor()])\r\ndataset = dataset.map(\r\n lambda x: {\r\n 'image': transform(x['image']),\r\n 'prompt': x['prompt'],\r\n 'width': x['width'],\r\n 'height': x['height'],\r\n }, \r\n # num_proc=4,\r\n)\r\ndataset\r\n```\r\n\r\nAnd the `dataset.map()` stucks at `Map:โโ99%โ986/1000โ[00:07<00:00,โ145.72โexamples/s]`.\r\n\r\nAlso, there is 1 process left in `htop` with 100% CPU usage. And if I add `num_proc=4,`, there will be 4 same processes left.\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.15.0\r\n- Python version: 3.12.2\r\n- Platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39",
"Hi @zmoki688, I've noticed since that it's pretty common for disk writes to lag behind the operations performed by the `map` operator (especially when the data is large and the operations are cheap). Since the progress bar doesn't seem to account for the writes, it speeds up to 99% but wait until all writes are done. At least that's what I think happens when monitoring my disks I/O (with `iotop` and the likes)"
] |
2023-07-26T14:00:40Z
|
2024-07-22T12:28:06Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
Hi !
I'm currently working with a large (~150GB) unnormalized dataset at work.
The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it.
I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation of each feature over the entire dataset. I cannot load the entire dataset into RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them to the dataset.
The problem lies in the second mapping, as it gets stuck at ~99%. Checking what the process does (using `htop` and `strace`), it seems to be doing a lot of I/O operations, and I'm not sure why.
Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me.
### Steps to reproduce the bug
I'm able to reproduce the problem using the following scripts:
```python
# random_data.py
import datasets
import torch
_VERSION = "1.0.0"
class RandomDataset(datasets.GeneratorBasedBuilder):
def _info(self):
return datasets.DatasetInfo(
version=_VERSION,
supervised_keys=None,
features=datasets.Features(
{
"positions": datasets.Array2D(
shape=(30000, 3),
dtype="float32",
),
"normals": datasets.Array2D(
shape=(30000, 3),
dtype="float32",
),
"features": datasets.Array2D(
shape=(30000, 6),
dtype="float32",
),
"scalars": datasets.Sequence(
feature=datasets.Value("float32"),
length=20,
),
},
),
)
def _split_generators(self, dl_manager):
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN, # type: ignore
gen_kwargs={"nb_samples": 1000},
),
datasets.SplitGenerator(
name=datasets.Split.TEST, # type: ignore
gen_kwargs={"nb_samples": 100},
),
]
def _generate_examples(self, nb_samples: int):
for idx in range(nb_samples):
yield idx, {
"positions": torch.randn(30000, 3),
"normals": torch.randn(30000, 3),
"features": torch.randn(30000, 6),
"scalars": torch.randn(20),
}
```
```python
# main.py
import datasets
import torch
def apply_mean_std(
dataset: datasets.Dataset,
means: dict[str, torch.Tensor],
stds: dict[str, torch.Tensor],
) -> dict[str, torch.Tensor]:
"""Normalize the dataset using the mean and standard deviation of each feature.
Args:
dataset (`Dataset`): A huggingface dataset.
means (`dict[str, Tensor]`): A dictionary containing the mean of each feature.
stds (`dict[str, Tensor]`): A dictionary containing the standard deviation of each feature.
Returns:
dict: A dictionary containing the normalized dataset.
"""
result = {}
for key in means.keys():
# extract data from dataset
data: torch.Tensor = dataset[key] # type: ignore
# extract mean and std from dict
mean = means[key] # type: ignore
std = stds[key] # type: ignore
# normalize data
normalized_data = (data - mean) / std
result[key] = normalized_data
return result
# get dataset
ds = datasets.load_dataset(
path="random_data.py",
split="train",
).with_format("torch")
# compute mean (along last axis)
means = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names}
means_sq = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names}
for batch in ds.iter(batch_size=8):
for key in ds.column_names:
data = batch[key]
batch_size = data.shape[0]
data = data.reshape(-1, data.shape[-1])
means[key] += data.mean(dim=0) / len(ds) * batch_size
means_sq[key] += (data**2).mean(dim=0) / len(ds) * batch_size
# compute std (along last axis)
stds = {key: torch.sqrt(means_sq[key] - means[key] ** 2) for key in ds.column_names}
# normalize each feature of the dataset
ds_normalized = ds.map(
desc="Applying mean/std", # type: ignore
function=apply_mean_std,
batched=False,
fn_kwargs={
"means": means,
"stds": stds,
},
)
```
### Expected behavior
Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is extremely slow; for example, reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange, and I'm sure I must be missing something, but I would still expect this to be faster.
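As a side note, a minimal sketch of one thing that may be worth trying (an assumption on my part, not a confirmed fix): each row holds large `Array2D` features (~1.4 MB of float32 per example), so lowering `writer_batch_size` in `map` reduces how many rows are buffered before each Arrow write and may smooth out the long I/O bursts:
```python
# sketch only: same map call as above, with a smaller Arrow writer batch
# (writer_batch_size defaults to 1000 rows)
ds_normalized = ds.map(
    desc="Applying mean/std",
    function=apply_mean_std,
    batched=False,
    fn_kwargs={"means": means, "stds": stds},
    writer_batch_size=100,  # assumption: smaller batches => smaller write/flush spikes
)
```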
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6077/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6075
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6075/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6075/events
|
https://github.com/huggingface/datasets/issues/6075
| 1,822,341,398
|
I_kwDODunzps5snrkW
| 6,075
|
Error loading music files using `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/susnato",
"id": 56069179,
"login": "susnato",
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"repos_url": "https://api.github.com/users/susnato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/susnato",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.",
"I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!"
] |
2023-07-26T12:44:05Z
|
2023-07-26T13:08:08Z
|
2023-07-26T13:08:08Z
|
NONE
| null | null | null | null |
### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
formatted_output = format_table(
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
return self.format_column(pa_table)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
with SoundFile(file, 'r', samplerate, channels,
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
self._file = self._open(file, mode_int, closefd)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
_error_check(_snd.sf_error(file_ptr),
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```
### Steps to reproduce the bug
Code to reproduce the error -
```python
from datasets import load_dataset
ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```
### Expected behavior
I should be able to read the music file without any error.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
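Side note: the environment info above doesn't list the `soundfile` version. Since MP3 decoding needs `soundfile>=0.12.1` (per the reply in this thread), a quick check worth running first (a minimal sketch, nothing specific to this dataset):
```python
import soundfile as sf

# MP3 decoding in the Audio feature requires soundfile >= 0.12.1
print(sf.__version__)
# if it is older: pip install -U soundfile
```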
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/susnato",
"id": 56069179,
"login": "susnato",
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"repos_url": "https://api.github.com/users/susnato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/susnato",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6075/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6073
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6073/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6073/events
|
https://github.com/huggingface/datasets/issues/6073
| 1,822,167,804
|
I_kwDODunzps5snBL8
| 6,073
|
version2.3.2 load_dataset()data_files can't include .xxxx in path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4",
"events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}",
"followers_url": "https://api.github.com/users/BUAAChuanWang/followers",
"following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}",
"gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BUAAChuanWang",
"id": 45893496,
"login": "BUAAChuanWang",
"node_id": "MDQ6VXNlcjQ1ODkzNDk2",
"organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs",
"received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events",
"repos_url": "https://api.github.com/users/BUAAChuanWang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BUAAChuanWang",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)."
] |
2023-07-26T11:09:31Z
|
2023-08-29T15:53:59Z
|
2023-08-29T15:53:59Z
|
NONE
| null | null | null | null |
### Describe the bug
First, I `cd` into my working directory.
Then I call `load_dataset("json", data_file={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`.
That call fails with:
`FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/`
After debugging, the same call works fine in version 2.1.2, so there may be a bug in how the data file paths are joined/resolved.
Here is the whole bug report:
```
/x/datasets/load.py:1656 in load_dataset
    builder_instance = load_dataset_builder(
/x/datasets/load.py:1439 in load_dataset_builder
    dataset_module = dataset_module_factory(
/x/datasets/load.py:1097 in dataset_module_factory
    return PackagedDatasetModuleFactory(
/x/datasets/load.py:743 in get_module
    data_files = DataFilesDict.from_local_or_remote(
/x/datasets/data_files.py:590 in from_local_or_remote
    DataFilesList.from_local_or_remote(
/x/datasets/data_files.py:558 in from_local_or_remote
    data_files = resolve_patterns_locally_or_by_urls(base_path, pa
/x/datasets/data_files.py:195 in resolve_patterns_locally_or_by_urls
    for path in _resolve_single_pattern_locally(base_path, pat
/x/datasets/data_files.py:145 in _resolve_single_pattern_locally
    raise FileNotFoundError(error_msg)
```
### Steps to reproduce the bug
1. Version=2.3.2
2. In shell, cd workdir.(cd /a/b/c/.d/)
3. load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"})
### Expected behavior
`load_dataset` should resolve the data files even when the path contains a dot-prefixed directory (e.g. `/a/b/c/.d/`), as it did in version 2.1.2.
### Environment info
2.3.2
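For reference, a minimal sketch of the same call on a recent release (the maintainer reply above recommends upgrading to 2.14.0, which contains the fixes for resolving hidden, dot-prefixed files/directories); note that the keyword argument is `data_files` (plural):
```python
from datasets import load_dataset

# sketch: same paths as above, on datasets >= 2.14.0
ds = load_dataset(
    "json",
    data_files={
        "train": "/a/b/c/.d/train/train.json",
        "test": "/a/b/c/.d/train/test.json",
    },
)
```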
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6073/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6071
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6071/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6071/events
|
https://github.com/huggingface/datasets/issues/6071
| 1,821,990,749
|
I_kwDODunzps5smV9d
| 6,071
|
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much ๐ \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a `fsspec.implementations.arrow.ArrowFSWrapper` [to make it](https://arrow.apache.org/docs/python/filesystems.html#using-arrow-filesystems-with-fsspec) `fsspec` compatible). I also register it as an entrypoint with `fsspec` so that it's the one that gets automatically resolved when looking for filesystems for the `s3` protocol\r\n\r\nIn my case the `storage_option` that seemed not getting piped through was the filesystem's `endpoint_override` that I use in some tests to point at a mock S3 bucket"
] |
2023-07-26T09:37:20Z
|
2023-07-27T12:42:58Z
|
2023-07-27T12:42:58Z
|
NONE
| null | null | null | null |
### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options,
download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6071/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6069
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6069/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6069/events
|
https://github.com/huggingface/datasets/issues/6069
| 1,820,831,535
|
I_kwDODunzps5sh68v
| 6,069
|
KeyError: dataset has no key "image"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/etetteh",
"id": 28512232,
"login": "etetteh",
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"repos_url": "https://api.github.com/users/etetteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/etetteh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n",
"This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef resize(examples):\r\n examples[\"pixel_values\"] = [image.convert(\"RGB\").resize((300, 300)) for image in examples[\"image\"]]\r\n return examples\r\n\r\ndef preprocess_train(example_batch):\r\n print(f\"Example batch: \\n{example_batch}\")\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\nimage_dataset = image_dataset.map(resize, remove_columns=[\"image\"], batched=True)\r\n\r\nimage_dataset[\"train\"].set_transform(preprocess_train)\r\nimage_dataset[\"validation\"].set_transform(preprocess_val)\r\n```\r\n\r\nWhen I print ds.column_names I get the following\r\n`{'train': ['image', 'label'], 'validation': ['image', 'label'], 'test': ['image', 'label']}`\r\n\r\nThe `print(f\"Example batch: \\n{example_batch}\")` in the `preprocess_train` function outputs only labels without images:\r\n```\r\nExample batch: \r\n{'label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]}\r\n```\r\n\r\nThe weird part of it all is that a sample code runs in a jupyter lab notebook without any bugs, but when I run my scripts from the terminal I get the bug. The same code.",
"The `remove_columns=[\"image\"]` argument in the `.map` call removes the `image` column from the output, so drop this argument to preserve it.",
"The problem is not with the removal of the image key. The bug is why only the labels are sent to be process, instead of all the featues or dictionary keys.\r\n\r\nP.S. I just dropped the removal argument as you've suggested, but that didn't solve the problem, because only the labels are being sent to be processed",
"All the `image_dataset.column_names` after the `map` call should also be present in `preprocess_train `/`preprocess_val` unless (input) `columns` in `set_transform` are specified.\r\n\r\nIf that's not the case, we need a full reproducer (not snippets) with the environment info.",
"I have resolved the error after including a collate function as indicated in the Quick Start session of the Datasets docs.:\r\n\r\nHere is what I did:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef preprocess_train(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"image\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"image\"]\r\n ]\r\n return example_batch\r\n\r\ndef collate_fn(examples):\r\n images = []\r\n labels = []\r\n for example in examples:\r\n images.append((example[\"pixel_values\"]))\r\n labels.append(example[\"label\"])\r\n\r\n pixel_values = torch.stack(images)\r\n labels = torch.tensor(labels)\r\n return {\"pixel_values\": pixel_values, \"label\": labels}\r\n\r\ntrain_dataset = image_dataset[\"train\"].with_transform(preprocess_train)\r\nval_dataset = image_dataset[\"validation\"].with_transform(preprocess_val)\r\n\r\nimage_datasets = {\r\n \"train\": train_dataset,\r\n \"val\": val_dataset\r\n}\r\n\r\nsamplers = {\r\n \"train\": data.RandomSampler(train_dataset),\r\n \"val\": data.SequentialSampler(val_dataset),\r\n}\r\n\r\ndataloaders = {\r\n x: data.DataLoader(\r\n image_datasets[x],\r\n collate_fn=collate_fn,\r\n batch_size=batch_size,\r\n sampler=samplers[x],\r\n num_workers=args.num_workers,\r\n worker_init_fn=utils.set_seed_for_worker,\r\n generator=g,\r\n pin_memory=True,\r\n )\r\n for x in [\"train\", \"val\"]\r\n}\r\n\r\ntrain_loader, val_loader = dataloaders[\"train\"], dataloaders[\"val\"]\r\n```\r\nEverything runs fine without any bug now. ",
"are you using hf Trainer? hf trainer will remove columns not used in model.forward. set `remove_unused_columns=False` might works"
] |
2023-07-25T17:45:50Z
|
2024-09-06T08:16:16Z
|
2023-07-27T12:42:17Z
|
NONE
| null | null | null | null |
### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir=path-to-data)`
And defined a transform to process the data, following the Datasets docs.
However, I get a `KeyError`, indicating there is no "image" key in my dataset. When I print the `example_batch` sent to the transformation function, it shows that only the labels are being passed to the function.
For some reason, the images are not in the example batches.
### Steps to reproduce the bug
I'm using the latest stable version of datasets
### Expected behavior
I expect the example_batches to contain both images and labels
### Environment info
I'm using the latest stable version of datasets
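For anyone hitting something similar, a minimal self-contained sketch of the pattern that resolved this (see the thread above): apply the transform with `with_transform` so the `image` column is still present when the batch reaches the function. The path and the resize transform below are placeholders, not the original `data_transforms`:
```python
from datasets import load_dataset
from torchvision import transforms

ds = load_dataset("imagefolder", data_dir="path/to/data")  # columns: ["image", "label"]

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def preprocess(batch):
    # "image" is available here because with_transform does not drop columns
    batch["pixel_values"] = [to_tensor(img.convert("RGB")) for img in batch["image"]]
    return batch

train_ds = ds["train"].with_transform(preprocess)
print(train_ds[0].keys())  # includes "pixel_values" alongside "image" and "label"
```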
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/etetteh",
"id": 28512232,
"login": "etetteh",
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"repos_url": "https://api.github.com/users/etetteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/etetteh",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6069/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6066
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6066/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6066/events
|
https://github.com/huggingface/datasets/issues/6066
| 1,819,717,542
|
I_kwDODunzps5sdq-m
| 6,066
|
AttributeError: '_tqdm_cls' object has no attribute '_lock'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! I opened https://github.com/huggingface/datasets/pull/6067 to add the missing `_lock`\r\n\r\nWe'll do a patch release soon, but feel free to install `datasets` from source in the meantime",
"I have tested the latest main, it does not work.\r\n\r\nI add more logs to reproduce this issue, it looks like a multi threading bug:\r\n\r\n```python\r\n@contextmanager\r\ndef ensure_lock(tqdm_class, lock_name=\"\"):\r\n \"\"\"get (create if necessary) and then restore `tqdm_class`'s lock\"\"\"\r\n import os\r\n import threading\r\n print(os.getpid(), threading.get_ident(), \"ensure_lock\", tqdm_class, lock_name)\r\n old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock\r\n lock = old_lock or tqdm_class.get_lock() # maybe create a new lock\r\n lock = getattr(lock, lock_name, lock) # maybe subtype\r\n tqdm_class.set_lock(lock)\r\n print(os.getpid(), threading.get_ident(), \"set_lock\")\r\n yield lock\r\n if old_lock is None:\r\n print(os.getpid(), threading.get_ident(), \"del tqdm_class\")\r\n del tqdm_class._lock\r\n else:\r\n tqdm_class.set_lock(old_lock)\r\n```\r\noutput\r\n```\r\n64943 8424758784 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 8424758784 set_lock\r\n64943 8424758784 del tqdm_class\r\n64943 8424758784 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 8424758784 set_lock\r\n64943 8424758784 del tqdm_class\r\n64943 11638370304 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11638370304 set_lock\r\n64943 11568967680 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11568967680 set_lock\r\n64943 11638370304 del tqdm_class\r\n64943 11638370304 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11638370304 set_lock\r\n64943 11638370304 del tqdm_class\r\n64943 11568967680 del tqdm_class\r\n```\r\n\r\nThread `11638370304` del the _lock from tqdm_class first, then thread `11568967680` del _lock failed.",
"Maybe it is a bug of tqdm? I think simply use `try ... except AttributeError ...` wraps `del tqdm_class._lock` should work.",
"Yes it looks like a bug on their end indeed, do you want to open a PR on tqdm ?\r\n\r\nLet me see if I can find a workaround in the meantime",
"I opened https://github.com/huggingface/datasets/pull/6068 if you want to try it out",
"> I opened #6068 if you want to try it out\r\n\r\nThis fix works! Thanks.",
"Awesome ! closing this then :)\r\nWe'll do a patch release today or tomorrow"
] |
2023-07-25T07:24:36Z
|
2023-07-26T10:56:25Z
|
2023-07-26T10:56:24Z
|
NONE
| null | null | null | null |
### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
return thread_map(
^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
del tqdm_class._lock
^^^^^^^^^^^^^^^^
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```
### Steps to reproduce the bug
Happens occasionally.
### Expected behavior
I added a print in tqdm's `ensure_lock()` and got `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>`.
According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24
```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
"""get (create if necessary) and then restore `tqdm_class`'s lock"""
print("ensure_lock", tqdm_class, lock_name)
old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock
lock = old_lock or tqdm_class.get_lock() # maybe create a new lock
lock = getattr(lock, lock_name, lock) # maybe subtype
tqdm_class.set_lock(lock)
yield lock
if old_lock is None:
del tqdm_class._lock # <-- It tries to del the `_lock` attribute from tqdm_class.
else:
tqdm_class.set_lock(old_lock)
```
But, huggingface datasets `datasets.utils.logging._tqdm_cls` does not have the field `_lock`: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205
```python
class _tqdm_cls:
def __call__(self, *args, disable=False, **kwargs):
if _tqdm_active and not disable:
return tqdm_lib.tqdm(*args, **kwargs)
else:
return EmptyTqdm(*args, **kwargs)
def set_lock(self, *args, **kwargs):
self._lock = None
if _tqdm_active:
return tqdm_lib.tqdm.set_lock(*args, **kwargs)
def get_lock(self):
if _tqdm_active:
return tqdm_lib.tqdm.get_lock()
```
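To make the failure mode concrete, here is a tiny standalone sketch (a hypothetical class, not the actual `datasets`/`tqdm` code path) of how a second `del tqdm_class._lock` raises exactly this `AttributeError` once two threads have run `ensure_lock()` against the same proxy object, as discussed in the replies above:
```python
# minimal illustration of the race: thread A deletes _lock first, thread B's del then fails
class Proxy:
    def set_lock(self, lock=None):
        self._lock = lock

p = Proxy()

# thread A's epilogue in ensure_lock()
p.set_lock()
del p._lock

# thread B's epilogue arrives after A already removed the attribute
try:
    del p._lock
except AttributeError as err:
    print(f"reproduced: {err}")  # 'Proxy' object has no attribute '_lock'
```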
### Environment info
Python 3.11.4
tqdm '4.65.0'
datasets master
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6066/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6060
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6060/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6060/events
|
https://github.com/huggingface/datasets/issues/6060
| 1,816,614,120
|
I_kwDODunzps5sR1To
| 6,060
|
Dataset.map() execute twice when in PyTorch DDP mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanghaoyucn",
"id": 39429965,
"login": "wanghaoyucn",
"node_id": "MDQ6VXNlcjM5NDI5OTY1",
"organizations_url": "https://api.github.com/users/wanghaoyucn/orgs",
"received_events_url": "https://api.github.com/users/wanghaoyucn/received_events",
"repos_url": "https://api.github.com/users/wanghaoyucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanghaoyucn",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Sorry for asking a duplicate question about `num_proc`, I searched the forum and find the solution.\r\n\r\nBut I still can't make the trick with `torch.distributed.barrier()` to only map at the main process work. The [post on forum]( https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or-fairscale/7229/7) didn't help.",
"If it does the `map` twice then it means the hash of your map function is not some same between your two processes.\r\n\r\nCan you make sure your map functions have the same hash in different processes ?\r\n\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nprint(Hasher.hash(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True)))\r\nprint(Hasher.hash(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16)))\r\n```\r\n\r\nYou can also set the fingerprint used to reload the resulting dataset by passing `new_finegrprint=` in `map`, see https://huggingface.co/docs/datasets/v2.13.1/en/about_cache#the-cache. This will force the different processes to use the same fingerprint used to locate the resulting dataset in the cache.",
"Thanks for help! I find the fingerprint between processes don't have same hash:\r\n```\r\nRank 0: Gpu 0 cut_reorder_keys fingerprint c7f47f40e9a67657\r\nRank 0: Gpu 0 random_shift fingerprint 240a0ce79831e7d4\r\n\r\nRank 1: Gpu 1 cut_reorder_keys fingerprint 20edd3d9cf284001\r\nRank 1: Gpu 1 random_shift fingerprint 819f7c1c18e7733f\r\n```\r\nBut my functions only process the example one by one and don't need rank or other arguments. After all it can work in the test for dataset and dataloader.\r\nI'll try to set `new_fingerprint` to see if it works and figure out the reason of different hash.",
"I finally figure it out. The fingerprint of the function will change if other key-value pairs change in `args` even the `args.num_stations_list` is not changed.\r\n\r\n```python\r\nlambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True)\r\n```\r\n\r\nMy `args` contains the key `rank` which refers the rank of its GPU, so the fingerprints change among the GPUs.\r\nI use `partial` in `functools` to generate a partial function that fixs the argument `num_stations_list=args.num_stations_list`, and the fingerprint of this partial function keeps among the GPUs. Finally I can reuse the mapped cache."
] |
2023-07-22T05:06:43Z
|
2024-01-22T18:35:12Z
|
2024-01-22T18:35:12Z
|
NONE
| null | null | null | null |
### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and I wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes `map` doesn't always work: when I train the model, the mapping runs twice, but when I run a test that only builds the dataset and dataloader (just printing the batches), it works. The dataset-loading code is identical in both cases.
On another server with 30 CPU cores, using 2 GPUs, it doesn't work either.
I have tried checking `rank` and `local_rank`, but neither explained the behavior.
### Steps to reproduce the bug
use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run
This is my code:
```python
if args.distributed and world_size > 1:
if args.local_rank > 0:
print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True)
torch.distributed.barrier()
print("Mapping dataset")
dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys")
dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift")
dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys")
if args.local_rank == 0:
print("Mapping finished, loading results from main process")
torch.distributed.barrier()
```
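A minimal sketch of the `functools.partial` approach that resolved this (see the last reply above): a lambda that closes over the whole `args` namespace hashes differently on each rank (because `args.rank` differs), so every process computes a different fingerprint and redoes the map, while a `partial` over only the shared values hashes identically everywhere. The toy `scale` function below is a stand-in for `cut_reorder_keys`/`random_shift`:
```python
from functools import partial

from datasets import Dataset
from datasets.fingerprint import Hasher

def scale(example, factor):
    example["x"] = example["x"] * factor
    return example

ds = Dataset.from_dict({"x": [1, 2, 3]})

scale_by_16 = partial(scale, factor=16)  # captures only values shared by all ranks
print(Hasher.hash(scale_by_16))          # same hash on every process
ds = ds.map(scale_by_16, desc="scale")   # so the cached result can be reused across ranks
```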
### Expected behavior
Only the main process should execute `map`, while the other processes load the cached result from disk.
### Environment info
server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090
- `python==3.9.16`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `22.04.1-Ubuntu`
server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090
- `python==3.9.0`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `Ubuntu 20.04`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanghaoyucn",
"id": 39429965,
"login": "wanghaoyucn",
"node_id": "MDQ6VXNlcjM5NDI5OTY1",
"organizations_url": "https://api.github.com/users/wanghaoyucn/orgs",
"received_events_url": "https://api.github.com/users/wanghaoyucn/received_events",
"repos_url": "https://api.github.com/users/wanghaoyucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanghaoyucn",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6060/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6059
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6059/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6059/events
|
https://github.com/huggingface/datasets/issues/6059
| 1,816,537,176
|
I_kwDODunzps5sRihY
| 6,059
|
Provide ability to load label mappings from file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"I would like this also as I have been working with a dataset with hierarchical classes. In fact, I encountered this very issue when trying to define the dataset with a script. I couldn't find a work around and reverted to hard coding the class names in the readme yaml.\r\n\r\n@david-waterworth do you envision also being able to define the hierarchical structure of the classes?",
"@danielduckworth yes I did need to do that (but I ended up ditching datasets as it looks like this is a \"wont fix\"). ",
"@david-waterworth Hmm, that's a shame. What are you using now? Also, Iโm curious to know about the work youโre doing that involves hierarchical classes, if you donโt mind sharing."
] |
2023-07-22T02:04:19Z
|
2024-04-16T08:07:55Z
| null |
NONE
| null | null | null | null |
### Feature request
My task is classification over a dataset with a large label set that includes a hierarchy. Even ignoring the hierarchy, I'm not able to find an example using `datasets` where the label names aren't hard-coded. Hard-coding works fine for a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file.
It is possible to pass a file to `ClassLabel`, but I cannot see an easy way of using this with `GeneratorBasedBuilder`, since `self._info` is called before the `dl_manager` is constructed; even if my dataset contains, say, a `label_mappings.json`, there's no way of loading it in order to construct the `datasets.DatasetInfo`.
I can see other uses for accessing the `download_manager` from `self._info`, e.g. if the files contain a schema (`arrow` or `parquet` files) the `datasets.DatasetInfo` could be inferred from it.
The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it.
```python
class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.features.ClassLabel(names=["label_1", "label_2"]),
}
),
task_templates=[TextClassification(text_column="text", label_column="label")],
)
def _split_generators(self, dl_manager):
train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
return [
datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
]
def _generate_examples(self, filepath):
"""Generate AG News examples."""
with open(filepath, encoding="utf-8") as csv_file:
csv_reader = csv.DictReader(csv_file)
for id_, row in enumerate(csv_reader):
yield id_, row
```
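A minimal sketch of the ".py generated from the JSON" workaround mentioned above (file names here are illustrative): a small script turns `label_mappings.json` into an importable module, and the loading script imports it so `_info` stays data-driven without needing the `dl_manager`:
```python
# generate_label_module.py -- rerun whenever label_mappings.json changes
import json
from pathlib import Path

names = json.loads(Path("label_mappings.json").read_text())  # e.g. ["label_1", "label_2"]
Path("label_names.py").write_text(f"LABEL_NAMES = {names!r}\n")
```
The loading script can then do `from .label_names import LABEL_NAMES` (as far as I know, local `.py` imports next to the script are copied along with it) and pass `datasets.features.ClassLabel(names=LABEL_NAMES)` in `_info`.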
### Motivation
Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset.
### Your contribution
I'm willing to work on a PR with guidance.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6059/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6058
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6058/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6058/events
|
https://github.com/huggingface/datasets/issues/6058
| 1,815,131,397
|
I_kwDODunzps5sMLUF
| 6,058
|
laion-coco download error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4",
"events_url": "https://api.github.com/users/yangyijune/events{/privacy}",
"followers_url": "https://api.github.com/users/yangyijune/followers",
"following_url": "https://api.github.com/users/yangyijune/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangyijune",
"id": 54424110,
"login": "yangyijune",
"node_id": "MDQ6VXNlcjU0NDI0MTEw",
"organizations_url": "https://api.github.com/users/yangyijune/orgs",
"received_events_url": "https://api.github.com/users/yangyijune/received_events",
"repos_url": "https://api.github.com/users/yangyijune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangyijune",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This can also mean one of the files was not downloaded correctly.\r\n\r\nWe log an erroneous file's name before raising the reader's error, so this is how you can find the problematic file. Then, you should delete it and call `load_dataset` again.\r\n\r\n(I checked all the uploaded files, and they seem to be valid Parquet files, so I don't think this is a bug on their side)\r\n"
] |
2023-07-21T04:24:15Z
|
2023-07-22T01:42:06Z
|
2023-07-22T01:42:06Z
|
NONE
| null | null | null | null |
### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was de
precated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--
laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader
name='/home/bian/.cache/huggingface/datasets/downlo
ads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd4154
0cb9234572927f04c'>
Traceback (most recent call last):
File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
generator = self._generate_tables(**gen_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in
_generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file
.
```
I have carefully followed the instructions in #5264 but still get the same error.
Other helpful information:
```
ds = load_dataset("parquet", data_files=
...: "https://huggingface.co/datasets/laion/l
...: aion-coco/resolve/d22869de3ccd39dfec1507
...: f7ded32e4a518dad24/part-00000-2256f782-1
...: 26f-4dc6-b9c6-e6757637749d-c000.snappy.p
...: arquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```
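Based on the fix suggested in the comments (the problematic file's name is logged right before the reader's error is raised), a rough sketch of the recovery step, with a placeholder path to copy from your own log:
```python
# Sketch: delete the corrupted cached download reported in the traceback,
# then call load_dataset again so only the missing file is re-downloaded.
import os

from datasets import load_dataset

bad_file = "<path printed just before the ArrowInvalid error>"  # placeholder: copy from your log
if os.path.exists(bad_file):
    os.remove(bad_file)

dataset = load_dataset("laion/laion-coco")
```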
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("laion/laion-coco", ignore_verifications=True/False)
```
### Expected behavior
Properly load Laion-coco dataset
### Environment info
datasets==2.11.0 torch==1.12.1 python 3.10
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4",
"events_url": "https://api.github.com/users/yangyijune/events{/privacy}",
"followers_url": "https://api.github.com/users/yangyijune/followers",
"following_url": "https://api.github.com/users/yangyijune/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangyijune",
"id": 54424110,
"login": "yangyijune",
"node_id": "MDQ6VXNlcjU0NDI0MTEw",
"organizations_url": "https://api.github.com/users/yangyijune/orgs",
"received_events_url": "https://api.github.com/users/yangyijune/received_events",
"repos_url": "https://api.github.com/users/yangyijune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangyijune",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6058/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6057
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6057/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6057/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6057/events
|
https://github.com/huggingface/datasets/issues/6057
| 1,815,100,151
|
I_kwDODunzps5sMDr3
| 6,057
|
Why is the speed difference of gen example so big?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/46072190?v=4",
"events_url": "https://api.github.com/users/pixeli99/events{/privacy}",
"followers_url": "https://api.github.com/users/pixeli99/followers",
"following_url": "https://api.github.com/users/pixeli99/following{/other_user}",
"gists_url": "https://api.github.com/users/pixeli99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pixeli99",
"id": 46072190,
"login": "pixeli99",
"node_id": "MDQ6VXNlcjQ2MDcyMTkw",
"organizations_url": "https://api.github.com/users/pixeli99/orgs",
"received_events_url": "https://api.github.com/users/pixeli99/received_events",
"repos_url": "https://api.github.com/users/pixeli99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pixeli99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pixeli99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pixeli99",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi!\r\n\r\nIt's hard to explain this behavior without more information. Can you profile the slower version with the following code\r\n```python\r\nimport cProfile, pstats\r\nfrom datasets import load_dataset\r\n\r\nwith cProfile.Profile() as profiler:\r\n ds = load_dataset(...)\r\n\r\nstats = pstats.Stats(profiler).sort_stats(\"cumtime\")\r\nstats.print_stats()\r\n```\r\nand share the output?"
] |
2023-07-21T03:34:49Z
|
2023-10-04T18:06:16Z
|
2023-10-04T18:06:15Z
|
NONE
| null | null | null | null |
```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('text_content')
image_data = open(image_path, "rb").read()
yield idx, {
"text": text_content,
"image": {
"path": image_path,
"bytes": image_data,
},
"conditioning_image": {
"path": image_path,
"bytes": image_data,
},
}
```
Hello,
I use the above function to process my local dataset, but I am very surprised by how much the example-generation speed varies. When I start a training task, it is **sometimes 1000 examples/s and sometimes only 10 examples/s.**

I'm not saying the speed changes all the time within a run. I mean that the reading speed differs between training runs, which forces me to restart training over and over until the example-generation speed is back to normal.
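To narrow down whether the file reads themselves are the bottleneck, here is a small timing sketch (not part of the original report; the metadata path is a placeholder) that mimics the reading pattern of the generator above:
```python
# Sketch: measure raw generator throughput (JSON parse + one open()/read() per
# example) to see whether the slow runs come from disk I/O alone.
import json
import time

def iter_image_bytes(metadata_path):
    with open(metadata_path, "r") as file:
        metadata = json.load(file)
    for item in metadata:
        yield open(item["image_path"], "rb").read()

start = time.perf_counter()
n = sum(1 for _ in iter_image_bytes("metadata.json"))  # placeholder path
elapsed = time.perf_counter() - start
print(f"{n} examples in {elapsed:.1f}s -> {n / elapsed:.1f} examples/s")
```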
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6057/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6057/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6055
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6055/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6055/events
|
https://github.com/huggingface/datasets/issues/6055
| 1,813,524,145
|
I_kwDODunzps5sGC6x
| 6,055
|
Fix host URL in The Pile datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7540752?v=4",
"events_url": "https://api.github.com/users/nickovchinnikov/events{/privacy}",
"followers_url": "https://api.github.com/users/nickovchinnikov/followers",
"following_url": "https://api.github.com/users/nickovchinnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/nickovchinnikov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nickovchinnikov",
"id": 7540752,
"login": "nickovchinnikov",
"node_id": "MDQ6VXNlcjc1NDA3NTI=",
"organizations_url": "https://api.github.com/users/nickovchinnikov/orgs",
"received_events_url": "https://api.github.com/users/nickovchinnikov/received_events",
"repos_url": "https://api.github.com/users/nickovchinnikov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nickovchinnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickovchinnikov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nickovchinnikov",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2023-07-20T09:08:52Z
|
2023-07-20T09:09:37Z
| null |
NONE
| null | null | null | null |
### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets, but neither URL is working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
### Steps to reproduce the bug
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
And
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
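A quick way to confirm this is a server-side problem rather than a `datasets` issue is to probe the URLs directly (a sketch assuming `requests` is installed):
```python
# Sketch: check both hosts with a plain HTTP request to confirm the 404 / timeout
# comes from the server itself.
import requests

urls = [
    "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst",
    "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst",
]
for url in urls:
    try:
        r = requests.head(url, timeout=10, allow_redirects=True)
        print(url, r.status_code)
    except requests.RequestException as e:
        print(url, type(e).__name__)
```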
### Expected behavior
Downloading as normal.
### Environment info
Environment info
`datasets` version: 2.9.0
Platform: Windows
Python version: 3.9.13
| null |
{
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6055/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6054
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6054/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6054/events
|
https://github.com/huggingface/datasets/issues/6054
| 1,813,271,304
|
I_kwDODunzps5sFFMI
| 6,054
|
Multi-processed `Dataset.map` slows down a lot when `import torch`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4",
"events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}",
"followers_url": "https://api.github.com/users/ShinoharaHare/followers",
"following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}",
"gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShinoharaHare",
"id": 47121592,
"login": "ShinoharaHare",
"node_id": "MDQ6VXNlcjQ3MTIxNTky",
"organizations_url": "https://api.github.com/users/ShinoharaHare/orgs",
"received_events_url": "https://api.github.com/users/ShinoharaHare/received_events",
"repos_url": "https://api.github.com/users/ShinoharaHare/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShinoharaHare",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] | null |
[
"A duplicate of https://github.com/huggingface/datasets/issues/5929"
] |
2023-07-20T06:36:14Z
|
2023-07-21T15:19:37Z
|
2023-07-21T15:19:37Z
|
NONE
| null | null | null | null |
### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the speed slows down a lot if I add `import torch` at the start of the script, even though I don't use it.
I'm not sure whether this is specific to `torch` or whether any other "large" package would cause the same result.
BTW, `import lightning` also slows it down.
Below are the progress bars of `Dataset.map`; the only difference between them is whether `import torch` is present, yet the speed differs by a factor of 6-7.
- without `import torch`: (progress-bar screenshot)
- with `import torch`: (progress-bar screenshot)
### Steps to reproduce the bug
Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.
```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer
# import torch
# import lightning
def rearrange_datapoints(
batch,
tokenizer,
sequence_length,
):
datapoints = []
input_ids = []
for x in batch['input_ids']:
input_ids += x
while len(input_ids) >= sequence_length:
datapoint = input_ids[:sequence_length]
datapoints.append(datapoint)
input_ids[:sequence_length] = []
if input_ids:
paddings = [-1] * (sequence_length - len(input_ids))
datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
datapoints.append(datapoint)
batch['input_ids'] = datapoints
return batch
if __name__ == '__main__':
disable_caching()
tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
dataset = load_from_disk('...')
dataset = dataset.map(
rearrange_datapoints,
fn_kwargs=dict(
tokenizer=tokenizer,
sequence_length=2048,
),
batched=True,
num_proc=8,
)
```
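A minimal timing harness (an addition, not part of the original script) to quantify the slowdown: run it once as-is, then once with `import torch` added at the top, and compare the timings:
```python
# Sketch: time a trivial multi-processed map on a synthetic dataset so the
# effect of the extra import can be measured in isolation.
import time

from datasets import Dataset

def passthrough(batch):
    return batch

if __name__ == "__main__":
    ds = Dataset.from_dict({"input_ids": [[1] * 512] * 20_000})
    start = time.perf_counter()
    ds.map(passthrough, batched=True, num_proc=8)
    print(f"map with num_proc=8 took {time.perf_counter() - start:.1f}s")
```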
### Expected behavior
The speed of the multi-processed `Dataset.map` should be the same with and without `import torch`.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4",
"events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}",
"followers_url": "https://api.github.com/users/ShinoharaHare/followers",
"following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}",
"gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShinoharaHare",
"id": 47121592,
"login": "ShinoharaHare",
"node_id": "MDQ6VXNlcjQ3MTIxNTky",
"organizations_url": "https://api.github.com/users/ShinoharaHare/orgs",
"received_events_url": "https://api.github.com/users/ShinoharaHare/received_events",
"repos_url": "https://api.github.com/users/ShinoharaHare/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShinoharaHare",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6054/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6053
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6053/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6053/events
|
https://github.com/huggingface/datasets/issues/6053
| 1,812,635,902
|
I_kwDODunzps5sCqD-
| 6,053
|
Change package name from "datasets" to something less generic
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"events_url": "https://api.github.com/users/jack-jjm/events{/privacy}",
"followers_url": "https://api.github.com/users/jack-jjm/followers",
"following_url": "https://api.github.com/users/jack-jjm/following{/other_user}",
"gists_url": "https://api.github.com/users/jack-jjm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jack-jjm",
"id": 2124157,
"login": "jack-jjm",
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"organizations_url": "https://api.github.com/users/jack-jjm/orgs",
"received_events_url": "https://api.github.com/users/jack-jjm/received_events",
"repos_url": "https://api.github.com/users/jack-jjm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jack-jjm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jack-jjm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jack-jjm",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"This would break a lot of existing code, so we can't really do this.",
"I encountered this issue while working on a large project with 6+ years history. We have a submodule named datasets in the backend, and face a big challenge incorporating huggingface datasets into the project, especially considering django app renaming and other issues.\r\nIt would be nice if the authors at least provide a recipe on how to avoid name conflict in this situation."
] |
2023-07-19T19:53:28Z
|
2024-11-20T21:22:36Z
|
2023-10-03T16:04:09Z
|
NONE
| null | null | null | null |
### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6053/timeline
| null |
not_planned
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6051
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6051/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6051/events
|
https://github.com/huggingface/datasets/issues/6051
| 1,811,549,650
|
I_kwDODunzps5r-g3S
| 6,051
|
Skipping shard in the remote repo and resume upload
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4",
"events_url": "https://api.github.com/users/rs9000/events{/privacy}",
"followers_url": "https://api.github.com/users/rs9000/followers",
"following_url": "https://api.github.com/users/rs9000/following{/other_user}",
"gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rs9000",
"id": 9029817,
"login": "rs9000",
"node_id": "MDQ6VXNlcjkwMjk4MTc=",
"organizations_url": "https://api.github.com/users/rs9000/orgs",
"received_events_url": "https://api.github.com/users/rs9000/received_events",
"repos_url": "https://api.github.com/users/rs9000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rs9000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rs9000",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before `push_to_hub` in a faster manner and cache it to disk:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\n# load_dataset(...)\r\nformat = dataset.format\r\ndataset = dataset.with_format(\"arrow\")\r\ndataset = dataset.map(embed_table_storage, batched=True)\r\ndataset = dataset.with_format(**format)\r\n# push_to_hub(...)\r\n```\r\n\r\n(In Datasets 3.0, these external bytes will be written to an Arrow file when generating a dataset to avoid this \"embed\" step)",
"Hi, thanks, this solution saves some time.\r\nBut can't we avoid embedding all external image files bytes with each push, skipping the images that have already been pushed into the repo?\r\n\r\nEdit: Ok I missed the part of cache it manually on the disk the first time, this solves the problem. Thank you"
] |
2023-07-19T09:25:26Z
|
2023-07-20T18:16:01Z
|
2023-07-20T18:16:00Z
|
NONE
| null | null | null | null |
### Describe the bug
For some reason, when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enumerate(itertools.chain([first_shard], shards_iter)),
desc="Pushing dataset shards to the dataset hub",
total=num_shards,
disable=not logging.is_progress_bar_enabled(),
):
shard_path_in_repo = path_in_repo(index, shard)
# Upload a shard only if it doesn't already exist in the repository
if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
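A rough sketch of that idea, assuming the shard files follow the `data/train-XXXXX-of-XXXXX.parquet` naming used by `push_to_hub` and that `huggingface_hub` is installed (both are assumptions here):
```python
# Sketch: list the files already in the Hub repo and derive the first shard
# index that still needs to be uploaded, instead of iterating every shard.
from huggingface_hub import HfApi

api = HfApi()
existing = set(api.list_repo_files("repo/name", repo_type="dataset"))

num_shards = 500  # placeholder: the shard count used by push_to_hub
uploaded = {
    i
    for i in range(num_shards)
    if f"data/train-{i:05d}-of-{num_shards:05d}.parquet" in existing
}
resume_from = min(set(range(num_shards)) - uploaded, default=num_shards)
print(f"{len(uploaded)} shards already uploaded, resume from shard {resume_from}")
```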
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Skip the uploaded shards faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4",
"events_url": "https://api.github.com/users/rs9000/events{/privacy}",
"followers_url": "https://api.github.com/users/rs9000/followers",
"following_url": "https://api.github.com/users/rs9000/following{/other_user}",
"gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rs9000",
"id": 9029817,
"login": "rs9000",
"node_id": "MDQ6VXNlcjkwMjk4MTc=",
"organizations_url": "https://api.github.com/users/rs9000/orgs",
"received_events_url": "https://api.github.com/users/rs9000/received_events",
"repos_url": "https://api.github.com/users/rs9000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rs9000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rs9000",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6051/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6048
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6048/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6048/events
|
https://github.com/huggingface/datasets/issues/6048
| 1,809,629,346
|
I_kwDODunzps5r3MCi
| 6,048
|
when i use datasets.load_dataset, i encounter the http connect error!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/137855591?v=4",
"events_url": "https://api.github.com/users/yangy1992/events{/privacy}",
"followers_url": "https://api.github.com/users/yangy1992/followers",
"following_url": "https://api.github.com/users/yangy1992/following{/other_user}",
"gists_url": "https://api.github.com/users/yangy1992/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangy1992",
"id": 137855591,
"login": "yangy1992",
"node_id": "U_kgDOCDeCZw",
"organizations_url": "https://api.github.com/users/yangy1992/orgs",
"received_events_url": "https://api.github.com/users/yangy1992/received_events",
"repos_url": "https://api.github.com/users/yangy1992/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangy1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangy1992/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangy1992",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The `audiofolder` loader is not available in version `2.3.2`, hence the error. Please run the `pip install -U datasets` command to update the `datasets` installation to make `load_dataset(\"audiofolder\", ...)` work."
] |
2023-07-18T10:16:34Z
|
2023-07-18T16:18:39Z
|
2023-07-18T16:18:39Z
|
NONE
| null | null | null | null |
### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
When I run the code above, I get the error below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
All my data is on the local machine, so why does it need to connect to the internet? How can I fix this, given that my machine cannot connect to the internet?
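Assuming the dataset is fully local, one possible setup (a sketch, not a confirmed fix for 2.3.2) is to first upgrade `datasets` — the `audiofolder` loader ships with recent versions, so no script download is needed — and then force offline mode so nothing tries to reach the network:
```python
# Sketch: with an upgraded datasets install, disable all network calls and load
# the local audio folder. HF_DATASETS_OFFLINE must be set before the import.
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets
from datasets import load_dataset

common_voice_test = load_dataset(
    "audiofolder",
    data_dir="./dataset/",
    cache_dir="./cache",
    split=datasets.Split.TEST,
)
```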
### Steps to reproduce the bug
1
### Expected behavior
No error when I use the `load_dataset` function.
### Environment info
python=3.8.15
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6048/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6046
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6046/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6046/events
|
https://github.com/huggingface/datasets/issues/6046
| 1,808,154,414
|
I_kwDODunzps5rxj8u
| 6,046
|
Support proxy and user-agent in fsspec calls
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/95092167?v=4",
"events_url": "https://api.github.com/users/zutarich/events{/privacy}",
"followers_url": "https://api.github.com/users/zutarich/followers",
"following_url": "https://api.github.com/users/zutarich/following{/other_user}",
"gists_url": "https://api.github.com/users/zutarich/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zutarich",
"id": 95092167,
"login": "zutarich",
"node_id": "U_kgDOBar9xw",
"organizations_url": "https://api.github.com/users/zutarich/orgs",
"received_events_url": "https://api.github.com/users/zutarich/received_events",
"repos_url": "https://api.github.com/users/zutarich/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zutarich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zutarich/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zutarich",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/95092167?v=4",
"events_url": "https://api.github.com/users/zutarich/events{/privacy}",
"followers_url": "https://api.github.com/users/zutarich/followers",
"following_url": "https://api.github.com/users/zutarich/following{/other_user}",
"gists_url": "https://api.github.com/users/zutarich/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zutarich",
"id": 95092167,
"login": "zutarich",
"node_id": "U_kgDOBar9xw",
"organizations_url": "https://api.github.com/users/zutarich/orgs",
"received_events_url": "https://api.github.com/users/zutarich/received_events",
"repos_url": "https://api.github.com/users/zutarich/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zutarich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zutarich/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zutarich",
"user_view_type": "public"
}
] | null |
[
"hii @lhoestq can you assign this issue to me?\r\n",
"You can reply \"#self-assign\" to this issue to automatically get assigned to it :)\r\nLet me know if you have any questions or if I can help",
"#2289 ",
"Actually i am quite new to figure it out how everything goes and done \r\n\r\n> You can reply \"#self-assign\" to this issue to automatically get assigned to it :)\r\n> Let me know if you have any questions or if I can help\r\n\r\nwhen i wrote #self-assign it automatically got converted to some number is it correct or i have done it some wrong way, I am quite new to open source thus wanna try to learn and explore it",
"#2289 #self-assign ",
"Ah yea github tries to replace the #self-assign with an issue link. I guess you can try to copy-paste instead to see if it works\r\n\r\nAnyway let me assign you manually",
"thanks a lot @lhoestq ! though i have a very lil idea of the issue, i am new. as i said before, but gonna try my best shot to do it.\r\ncan you please suggest some tips or anything from your side, how basically we approach it will be really helpfull.\r\nWill try my best!",
"The HfFileSystem from the `huggingface_hub` package can already read the HTTP_PROXY and HTTPS_PROXY environment variables. So the remaining thing missing is the `user_agent` that the user may include in a `DownloadConfig` object. The user agent can be used for regular http calls but also calls to the HfFileSystem.\r\n\r\n- for http, the `user_agent` isn't passed from `DownloadConfig` to `get_datasets_user_agent` in `_prepare_single_hop_path_and_storage_options` in `streaming_download_manager.py` so we need to include it\r\n- for HfFileSystem I think it requires a PR in https://github.com/huggingface/huggingface_hub to include it in the `HfFileSystem.__init__`",
"Hi @lhoestq ๐๐ผ\n\nIs anyone currently working on this? If not, I'd like to pick it up.\n\nAs I understand it:\n- The `user_agent` from `DownloadConfig` isn't currently passed to `get_datasets_user_agent()` inside `_prepare_single_hop_path_and_storage_options`.\n- I'll update that function to include the correct `user-agent` in the `headers`.\n- For full support, we may also need a change in `huggingface_hub` to let `HfFileSystem` accept custom headers.\n\nPlease let me know if this approach sounds good or if youโd prefer it handled differently ๐\n",
"Hi @lhoestq ๐๐ผ Just following up on this one!\n\nIโve opened [PR #7631](https://github.com/huggingface/datasets/pull/7631) to pass the user_agent from DownloadConfig into the fsspec storage options via _prepare_single_hop_path_and_storage_options (for HTTP/S).\n\nFor full support in HfFileSystem, I understand we might need a corresponding PR in huggingface_hub to add user_agent handling in its __init__. If you're okay with that direction, I can also raise a PR there!\n\nLet me know if any adjustments are needed ๐"
] |
2023-07-17T16:39:26Z
|
2025-06-26T18:26:27Z
| null |
MEMBER
| null | null | null | null |
Since we switched to the new HfFileSystem, we no longer apply the user's proxy and user-agent settings.
Using the HTTP_PROXY and HTTPS_PROXY environment variables still works, though, since we use aiohttp to call the HF Hub.
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
Ideally, though, the `HfFileSystem` could also support passing at least the proxies.
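For reference, a sketch of the two settings discussed here (the proxy URL, user agent, and dataset name are placeholders): the proxy env vars are already honored via aiohttp, while the `user_agent` passed through `DownloadConfig` is the part that currently doesn't reach the fsspec calls.
```python
# Sketch: proxies via environment variables (picked up today) and a custom
# user agent via DownloadConfig (the part this issue asks to propagate).
import os

os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"  # placeholder proxy
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"

from datasets import DownloadConfig, load_dataset

download_config = DownloadConfig(user_agent="my-app/0.1")  # placeholder user agent
ds = load_dataset("username/dataset_name", streaming=True, download_config=download_config)
```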
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6046/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6043
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6043/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6043/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6043/events
|
https://github.com/huggingface/datasets/issues/6043
| 1,807,771,750
|
I_kwDODunzps5rwGhm
| 6,043
|
Compression kwargs have no effect when saving datasets as csv
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hello @exs-avianello, I have reproduced the bug successfully and have understood the problem. But I am confused regarding this part of the statement, \"`pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`\".\r\n\r\nCan you please elaborate on it?\r\n\r\nThanks!",
"Hi @aryanxk02 ! Sure, what I actually meant is that when passing a path-like `path_or_buf` here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/arrow_dataset.py#L4708-L4714 \r\n\r\nit gets converted to a file object behind the scenes here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L92-L94\r\n\r\nand the eventual pandas `.to_csv()` calls that write to it always get `path_or_buf=None`, making pandas ignore the `compression` kwarg in the `to_csv_kwargs`\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L107-L109",
"@exs-avianello When `path_or_buf` is set to None, the `to_csv()` method will return the CSV data as a string instead of saving it to a file. Hence the compression doesn't take place. I think setting `path_or_buf=self.path_or_buf` should work. What you say?"
] |
2023-07-17T13:19:21Z
|
2023-07-22T17:34:18Z
| null |
NONE
| null | null | null | null |
### Describe the bug
When attempting to save a dataset as a compressed CSV file, the compression kwargs provided to `.to_csv()` (which get piped to pandas' `pandas.DataFrame.to_csv`) have no effect, so the dataset does not get compressed.
A warning is raised if a `compression` kwarg is provided explicitly, but no warning is raised when relying on the defaults. This can lead to datasets silently not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix.
### Steps to reproduce the bug
```python
# dataset is not compressed (but at least a warning is emitted)
import datasets
dataset = datasets.load_dataset("rotten_tomatoes", split="train")
dataset.to_csv("uncompressed.csv")
print(os.path.getsize("uncompressed.csv")) # 1008607
dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1})
print(os.path.getsize("compressed.csv.gz")) # 1008607
```
```shell
>>>
RuntimeWarning: compression has no effect when passing a non-binary object as input.
csv_str = batch.to_pandas().to_csv(
```
```python
# dataset is not compressed and no warnings are emitted
dataset.to_csv("compressed.csv.gz")
print(os.path.getsize("compressed.csv.gz")) # 1008607
# compare with
dataset.to_pandas().to_csv("pandas.csv.gz")
print(os.path.getsize("pandas.csv.gz")) # 418561
```
---
I think this is because, behind the scenes, `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`, but users who provide a path-like to `datasets.Dataset.to_csv` are likely not to expect or know that - leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg.
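One possible workaround, sketched under the assumption that `to_csv` writes bytes when handed a binary file object (which the buf-like code path above suggests), is to open the gzip handle yourself:
```python
# Sketch: open the compressed file yourself and hand the binary handle to
# to_csv, so compression no longer depends on the ignored kwarg.
import gzip
import os

import datasets

dataset = datasets.load_dataset("rotten_tomatoes", split="train")
with gzip.open("compressed.csv.gz", "wb") as f:
    dataset.to_csv(f)
print(os.path.getsize("compressed.csv.gz"))
```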
### Expected behavior
The dataset to be saved as a compressed csv file when providing a `compression` kwarg, or when relying on the default `compression='infer'`
### Environment info
`datasets == 2.13.1`
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6043/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6043/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6039
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6039/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6039/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6039/events
|
https://github.com/huggingface/datasets/issues/6039
| 1,806,508,451
|
I_kwDODunzps5rrSGj
| 6,039
|
Loading column subset from parquet file produces error since version 2.13
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kklemon",
"id": 1430243,
"login": "kklemon",
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"repos_url": "https://api.github.com/users/kklemon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kklemon",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] |
2023-07-16T09:13:07Z
|
2023-07-24T14:35:04Z
|
2023-07-24T14:35:04Z
|
NONE
| null | null | null | null |
### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
raise ValueError(
ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}'
```
This seems to occur because `datasets` is checking whether the columns in the schema exactly match the provided list of columns, instead of whether they are a subset.
### Steps to reproduce the bug
```python
# Prepare some sample data
import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
iris.to_parquet('iris.parquet')
# ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
print(iris.columns)
# Load data with datasets
from datasets import load_dataset
# Load full parquet file
dataset = load_dataset('parquet', data_files='iris.parquet')
# Load column subset; throws error for datasets>=2.13
dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length'])
```
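Until the subset check is relaxed, two possible workarounds (sketches, not fixes for the underlying check):
```python
# Sketch: either load everything and keep only the wanted columns afterwards,
# or read the column subset with pandas and wrap it in a Dataset.
import pandas as pd
from datasets import Dataset, load_dataset

# 1) select_columns after loading the full file
dataset = load_dataset("parquet", data_files="iris.parquet")
subset = dataset["train"].select_columns(["sepal_length"])

# 2) pandas handles the column subset natively
df = pd.read_parquet("iris.parquet", columns=["sepal_length"])
subset = Dataset.from_pandas(df)
```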
### Expected behavior
No error should be thrown and the given column subset should be loaded.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6039/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6039/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6038/events
|
https://github.com/huggingface/datasets/issues/6038
| 1,805,960,244
|
I_kwDODunzps5rpMQ0
| 6,038
|
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53547009?v=4",
"events_url": "https://api.github.com/users/BaiMeiyingxue/events{/privacy}",
"followers_url": "https://api.github.com/users/BaiMeiyingxue/followers",
"following_url": "https://api.github.com/users/BaiMeiyingxue/following{/other_user}",
"gists_url": "https://api.github.com/users/BaiMeiyingxue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BaiMeiyingxue",
"id": 53547009,
"login": "BaiMeiyingxue",
"node_id": "MDQ6VXNlcjUzNTQ3MDA5",
"organizations_url": "https://api.github.com/users/BaiMeiyingxue/orgs",
"received_events_url": "https://api.github.com/users/BaiMeiyingxue/received_events",
"repos_url": "https://api.github.com/users/BaiMeiyingxue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BaiMeiyingxue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BaiMeiyingxue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BaiMeiyingxue",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Instead of writing the loading script, you can use the built-in loader to [load JSON files](https://huggingface.co/docs/datasets/loading#json):\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"json\", data_files={\"train\": os.path.join(data_dir[\"train\"]), \"dev\": os.path.join(data_dir[\"dev\"])})\r\n```"
] |
2023-07-15T07:58:08Z
|
2023-07-24T11:54:15Z
|
2023-07-24T11:54:15Z
|
NONE
| null | null | null | null |
Hi, I use the code below to load a local file:
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
# dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
# It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
# By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
# urls = _URLS[self.config.name]
data_dir = dl_manager.download_and_extract(_URLs)
print(data_dir)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["train"]),
"split": "train",
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["dev"]),
"split": "dev",
},
),
]
```
and this error occurred:
```
Traceback (most recent call last):
File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module>
dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare
if str(split_generator.split_info.name).lower() == "all":
AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
```
Could you help me?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6038/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6038/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6037
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6037/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6037/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6037/events
|
https://github.com/huggingface/datasets/issues/6037
| 1,805,887,184
|
I_kwDODunzps5ro6bQ
| 6,037
|
Documentation links to examples are broken
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"These docs are outdated (version 1.2.1 is over two years old). Please refer to [this](https://huggingface.co/docs/datasets/dataset_script) version instead.\r\n\r\nInitially, we hosted datasets in this repo, but now you can find them [on the HF Hub](https://huggingface.co/datasets) (e.g. the [`ag_news`](https://huggingface.co/datasets/ag_news/blob/main/ag_news.py) script)",
"Sorry I thought I'd selected the latest version."
] |
2023-07-15T04:54:50Z
|
2023-07-17T22:35:14Z
|
2023-07-17T15:10:32Z
|
NONE
| null | null | null | null |
### Describe the bug
The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data are in csv files)
### Steps to reproduce the bug
Click on the links to examples in the latest documentation.
### Expected behavior
Links should be up to date - it might be more stable to link to https://huggingface.co/datasets/ag_news/blob/main/ag_news.py
### Environment info
dataset v1.2.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6037/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6037/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6034
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6034/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6034/events
|
https://github.com/huggingface/datasets/issues/6034
| 1,804,501,361
|
I_kwDODunzps5rjoFx
| 6,034
|
load_dataset hangs on WSL
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4",
"events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}",
"followers_url": "https://api.github.com/users/Andy-Zhou2/followers",
"following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}",
"gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Andy-Zhou2",
"id": 20140522,
"login": "Andy-Zhou2",
"node_id": "MDQ6VXNlcjIwMTQwNTIy",
"organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs",
"received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events",
"repos_url": "https://api.github.com/users/Andy-Zhou2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Andy-Zhou2",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Even if a dataset is cached, we still make requests to check whether the cache is up-to-date. [This](https://huggingface.co/docs/datasets/v2.13.1/en/loading#offline) section in the docs explains how to avoid them and directly load the cached version.",
"Thanks - that works! However it doesn't resolve the original issue (but I am not sure if it is a WSL problem)",
"We use `requests` to make HTTP requests (and `aiohttp` in the streaming mode), so I don't think we can provide much help regarding the socket issue (it probably has something to do with WSL). "
] |
2023-07-14T09:03:10Z
|
2023-07-14T14:48:29Z
|
2023-07-14T14:48:29Z
|
NONE
| null | null | null | null |
### Describe the bug
load_dataset simply hangs. It happens once every ~5 times and, interestingly, hangs for a multiple of 5 minutes (5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available, so I am not sure why a socket connection is needed. ([profiler result](https://ibb.co/0Btbbp8))
It only happens on WSL for me. It works on native Windows and on my MacBook (the cache is quickly recognized and loaded within a second).
### Steps to reproduce the bug
I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
```python
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
>>> import datasets
>>> datasets.load_dataset('ai2_arc', 'ARC-Challenge')  # hangs for 5/10/15 minutes
```
### Expected behavior
The cache should be quickly recognized and loaded within a second.
### Environment info
Please let me know if I should provide more environment information.
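For reference, a minimal sketch of the offline-mode workaround from the docs (assuming the dataset is already in the local cache), which skips the network check and the socket connect entirely:
```python
import os

# Must be set before importing datasets so no requests are made to the Hub.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

ds = datasets.load_dataset("ai2_arc", "ARC-Challenge")  # loads directly from the local cache
```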
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4",
"events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}",
"followers_url": "https://api.github.com/users/Andy-Zhou2/followers",
"following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}",
"gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Andy-Zhou2",
"id": 20140522,
"login": "Andy-Zhou2",
"node_id": "MDQ6VXNlcjIwMTQwNTIy",
"organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs",
"received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events",
"repos_url": "https://api.github.com/users/Andy-Zhou2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Andy-Zhou2",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6034/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6033
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6033/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6033/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6033/events
|
https://github.com/huggingface/datasets/issues/6033
| 1,804,482,051
|
I_kwDODunzps5rjjYD
| 6,033
|
`map` function doesn't fully utilize `input_columns`.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] |
2023-07-14T08:49:28Z
|
2023-07-14T09:16:04Z
|
2023-07-14T09:16:04Z
|
NONE
| null | null | null | null |
### Describe the bug
I wanted to select only some columns of the data, and I thought that's why the `input_columns` argument exists.
What I expected is this: if there are ["a", "b", "c", "d"] columns and I set `input_columns=["a", "d"]`, the resulting data will have only the ["a", "d"] columns.
But it doesn't select columns; it preserves the existing ones.
The main cause is the `update` call on the `dict`-typed `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns via `transformed_batch = dict(batch)`. Even though `function_args` selects only `input_columns`, `update` preserves the columns other than `input_columns`.
I think it should build a new dictionary containing only the columns in `input_columns`, like this:
```python
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs))
# This is what I think is correct:
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Please let me know how `input_columns` is meant to be used.
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8
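For reference, a minimal sketch (toy data, assuming datasets >= 2.11 for `to_iterable_dataset`) of how unused columns can be dropped explicitly today with `remove_columns`, which works independently of `input_columns`:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8]}).to_iterable_dataset()

# `input_columns` only controls what is passed to the function;
# `remove_columns` is what actually drops columns from the output.
ds = ds.map(
    lambda a, d: {"sum": a + d},
    input_columns=["a", "d"],
    remove_columns=["b", "c"],
)
print(next(iter(ds)))  # {'a': 1, 'd': 7, 'sum': 8}
```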
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6033/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6033/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6032
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6032/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6032/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6032/events
|
https://github.com/huggingface/datasets/issues/6032
| 1,804,358,679
|
I_kwDODunzps5rjFQX
| 6,032
|
DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"`HfApi` comes from the `huggingface_hub` package. You can use [this](https://huggingface.co/docs/huggingface_hub/v0.16.3/en/package_reference/utilities#huggingface_hub.configure_http_backend) utility to change the `huggingface_hub`'s `Session` proxies (see the example).\r\n\r\nWe plan to implement https://github.com/huggingface/datasets/issues/5080 and make this behavior more consistent eventually.",
"> this\r\n\r\nThanks. I will try `huggingface_hub.configure_http_backend` to change session's config.",
"@mariosasko are you saying if I do the following:\r\n\r\n```\r\ndef backend_factory() -> requests.Session:\r\n session = requests.Session()\r\n session.proxies = {\r\n \"https\": \"127.0.0.1:8887\",\r\n \"http\": \"127.0.0.1:8887\",\r\n }\r\n session.verify = \"/etc/ssl/certs/ca-certificates.crt\"\r\n return session\r\n\r\n# Set it as the default session factory\r\nconfigure_http_backend(backend_factory=backend_factory)\r\n```\r\n\r\nwhich works nicely with transformer library:\r\n\r\n```\r\ndef download_gpt_2_model():\r\n tokenizer = GPT2Tokenizer.from_pretrained(\r\n \"gpt2\", force_download=True, resume_download=False\r\n )\r\n text = \"Replace me by any text you'd like.\"\r\n encoded_input = tokenizer(text, return_tensors=\"pt\")\r\n print(encoded_input)\r\n\r\n model = GPT2Model.from_pretrained(\r\n \"gpt2\", force_download=True, resume_download=False\r\n )\r\n output = model(**encoded_input)\r\n```\r\n\r\nshould work for datasets library as well ?\r\n\r\nIn my case if I just do:\r\n\r\n```\r\ndef download_sts12_sts_dataset():\r\n dataset = load_dataset(\r\n \"mteb/sts12-sts\",\r\n download_mode=\"force_redownload\",\r\n verification_mode=\"basic_checks\",\r\n revision=\"main\",\r\n )\r\n\r\n```\r\nI am getting:\r\n`ConnectionError: Couldn't reach https://huggingface.co/datasets/mteb/sts12-sts/resolve/main/dataset_infos.json (ConnectTimeout(MaxRetryError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /datasets/mteb/sts12-sts/resolve/main/dataset_infos.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f429e87a3a0>, 'Connection to huggingface.co timed out. (connect timeout=100)'))\")))`\r\n\r\nwhich is typical when the proxy server is not defined. Looks like what is set in configure_http_backend(backend_factory=backend_factory) is ignore.\r\n\r\nIf I use env variable instead, it is working \r\n```\r\ndef download_sts12_sts_dataset():\r\n\r\n os.environ[\"https_proxy\"] = \"127.0.0.1:8887\"\r\n os.environ[\"http_proxy\"] = \"127.0.0.1:8887\"\r\n os.environ[\"REQUESTS_CA_BUNDLE\"] = \"/etc/ssl/certs/ca-certificates.crt\"\r\n\r\n dataset = load_dataset(\r\n \"mteb/sts12-sts\",\r\n download_mode=\"force_redownload\",\r\n verification_mode=\"basic_checks\",\r\n revision=\"main\",\r\n )\r\n```\r\n\r\nShould I add something ?\r\n\r\nI am using `huggingface_hub 0.15.1`, `datasets 2.13.0`, `transformers 4.30.2`",
"`huggingface_hub.configure_http_backend` works for `transformers` because they only use the `huggingface_hub` lib for downloads. Our download logic is a bit more complex (e.g., we also support downloading non-Hub files), so we are not aligned with them yet. In the meantime, it's best to use the env vars.",
"@mariosasko I fully understand that the logic for dataset is different. I see 2 issues with the current implementation of the env variables:\r\n\r\n- having the same https_proxy/http_prox/no_proxy env variables for all tools is not good in some case. For example I have 2 differents proxy server. In 2019 we had discussion with the Tensorflow teams and they recommended to do the following: TFDS_HTTP_PROXY, TFDS_HTTPS_PROXY ...\r\n- with recent version of requests, it is not possible to deactivate TLS interception (verify=false) by using env variable. This is useful to debug things and in some case TLS is not working and you need to ignore verifying the SSL certificate (probably not recommended) \r\n\r\nOne of the best way is to able to pass our requests.Session() directly\r\n```\r\nimport openai\r\nsession = requests.Session()\r\nsession.cert = CERT\r\nsession.verify = False\r\nopenai.requestssession = session\r\n```\r\n\r\nMy 2 cents in this discussion"
] |
2023-07-14T07:22:55Z
|
2023-09-11T13:50:41Z
| null |
NONE
| null | null | null | null |
### Describe the bug
```python
download_config = DownloadConfig(proxies={'https': '<my proxy>'})
builder = load_dataset_builder(..., download_config=download_config)
```
But when getting the dataset_info from `HfApi`, the HTTP requests do not use the proxies.
### Steps to reproduce the bug
1. Setup proxies in DownloadConfig.
2. Call `load_dataset_builder` with the download_config.
3. Inspect the call stack in HfApi.dataset_info.

### Expected behavior
DownloadConfig.proxies works for getting dataset_info.
### Environment info
https://github.com/huggingface/datasets/commit/406b2212263c0d33f267e35b917f410ff6b3bc00
Python 3.11.4
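In the meantime, a minimal workaround sketch (the proxy address is a placeholder) that mirrors the env-var approach from the discussion; both `requests` and `huggingface_hub` honor these variables, so the `HfApi.dataset_info` call is proxied as well:
```python
import os

# Placeholder proxy address; adjust to your environment.
os.environ["https_proxy"] = "http://127.0.0.1:8887"
os.environ["http_proxy"] = "http://127.0.0.1:8887"

from datasets import load_dataset_builder

# The dataset name is only an example; any Hub dataset resolves the same way.
builder = load_dataset_builder("rotten_tomatoes")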
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6032/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6032/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6031
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6031/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6031/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6031/events
|
https://github.com/huggingface/datasets/issues/6031
| 1,804,183,858
|
I_kwDODunzps5riaky
| 6,031
|
Argument type for map function changes when using `input_columns` for `IterableDataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Yes, this is intended."
] |
2023-07-14T05:11:14Z
|
2023-07-14T14:44:15Z
|
2023-07-14T14:44:15Z
|
NONE
| null | null | null | null |
### Describe the bug
I wrote a `tokenize(examples)` function and passed it to the `map` function of an `IterableDataset`.
It processes the dictionary-typed `examples` parameter.
It is used as `train_dataset = train_dataset.map(tokenize, batched=True)`.
No error is raised.
Then I found some unnecessary keys and values in `examples`, so I added the `input_columns` argument to the `map` call to select only the keys and values I need.
It gives me an error saying
```
TypeError: tokenize() takes 1 positional argument but 3 were given.
```
The code below matters.
https://github.com/huggingface/datasets/blob/406b2212263c0d33f267e35b917f410ff6b3bc00/src/datasets/iterable_dataset.py#L687
For example, `inputs = {"a":1, "b":2, "c":3}`.
If `self.input_columns` is `None`, `inputs` is passed as-is and `function_args` becomes a `list` containing that single `dict`:
`function_args` becomes `[{"a":1, "b":2, "c":3}]`.
Otherwise, let's say `self.input_columns = ["a", "c"]`:
`[inputs[col] for col in self.input_columns]` results in `[1, 3]`.
I think it should be `[{"a":1, "c":3}]`.
I want to ask whether this resulting format is intended.
Maybe I can modify `tokenize()` to take 2 parameters in this case instead of 1 dictionary, but this is confusing to me.
Or it should be fixed to `[{col: inputs[col] for col in self.input_columns}]`.
### Steps to reproduce the bug
Run the `map` function of an `IterableDataset` with the `input_columns` argument.
### Expected behavior
`function_args` should have the same format in both cases.
I think it should be `[{"a":1, "c":3}]`.
### Environment info
dataset version: 2.12
python: 3.8
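If this is intended, the function signature has to change as in the minimal sketch below (toy data, assuming datasets >= 2.11 for `to_iterable_dataset`), because the selected column values are passed positionally:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4], "c": [5, 6]}).to_iterable_dataset()

# With input_columns, the function receives the selected column values
# as positional arguments instead of a single example dict.
def add(a, c):
    return {"a_plus_c": a + c}

ds = ds.map(add, input_columns=["a", "c"])
print(next(iter(ds)))  # {'a': 1, 'b': 3, 'c': 5, 'a_plus_c': 6}
```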
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6031/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6031/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6025
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6025/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6025/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6025/events
|
https://github.com/huggingface/datasets/issues/6025
| 1,801,852,601
|
I_kwDODunzps5rZha5
| 6,025
|
Using a dataset for a use other than it was intended for.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/surya-narayanan",
"id": 17240858,
"login": "surya-narayanan",
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/surya-narayanan",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` "
] |
2023-07-12T22:33:17Z
|
2023-07-13T13:57:36Z
|
2023-07-13T13:57:36Z
|
NONE
| null | null | null | null |
### Describe the bug
Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave the dataset it throws `ValueError: Column label is not present in features.`. It seems the `label` column must be present in the dataset for some reason?
Here is the full stack trace:
```
File "/home/suryahari/Vornoi/tryage-handoff-other-datasets.py", line 276, in create_dataloaders
dataset = interleave_datasets(dsfold, stopping_strategy="all_exhausted")
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py", line 134, in interleave_datasets
return _interleave_iterable_datasets(
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1833, in _interleave_iterable_datasets
info = DatasetInfo.from_merge([d.info for d in datasets])
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in from_merge
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in <listcomp>
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 378, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 208, in __post_init__
self.task_templates = [
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 209, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/tasks/text_classification.py", line 20, in align_with_features
raise ValueError(f"Column {self.label_column} is not present in features.")
ValueError: Column label is not present in features.
```
### Steps to reproduce the bug
Delete the `label` column from the `rotten_tomatoes` dataset, then try to interleave it with other datasets.
### Expected behavior
It should let me use the dataset with just the `text` field.
### Environment info
latest datasets library? I don't think this was an issue in earlier versions.
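For anyone hitting this, a minimal sketch of the workaround suggested in the discussion (the second dataset is just an example of another text-only dataset):
```python
from datasets import load_dataset, interleave_datasets

rt = load_dataset("rotten_tomatoes", split="train").remove_columns("label")
ag = load_dataset("ag_news", split="train").remove_columns("label")  # example second dataset

# Drop the stale text-classification task templates, which still reference
# the removed "label" column and trigger the ValueError on interleave.
rt.info.task_templates = None
ag.info.task_templates = None

mixed = interleave_datasets([rt, ag], stopping_strategy="all_exhausted")
print(mixed[0])
```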
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6025/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6025/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6022/events
|
https://github.com/huggingface/datasets/issues/6022
| 1,800,092,589
|
I_kwDODunzps5rSzut
| 6,022
|
Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting! I've opened a PR with a fix."
] |
2023-07-12T03:20:17Z
|
2023-07-12T16:18:06Z
|
2023-07-12T16:18:05Z
|
NONE
| null | null | null | null |
### Describe the bug
When mapping some datasets with `batched=True`, `datasets` may raise an exception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3483, in _map_single
writer.write_batch(batch)
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_writer.py", line 549, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 2063, in cast_array_to_feature
return feature.cast_storage(array)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/features/features.py", line 1098, in cast_storage
if min_max["max"] >= self.num_classes:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/t1.py", line 33, in <module>
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 850, in map
{
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 851, in <dictcomp>
k: dataset.map(
^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 577, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 542, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3179, in map
for rank, done, content in iflatmap_unordered(
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
```
### Steps to reproduce the bug
1. Check out the latest main of `datasets`.
2. Run the code:
```python
from datasets import load_dataset
def transforms(examples):
# examples["pixel_values"] = [image.convert("RGB").resize((100, 100)) for image in examples["image"]]
return examples
ds = load_dataset("scene_parse_150")
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
print(ds)
```
### Expected behavior
`map` completes without an exception.
### Environment info
Datasets: https://github.com/huggingface/datasets/commit/b8067c0262073891180869f700ebef5ac3dc5cce
Python: 3.11.4
System: macOS
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6022/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6022/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6020
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6020/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6020/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6020/events
|
https://github.com/huggingface/datasets/issues/6020
| 1,799,720,536
|
I_kwDODunzps5rRY5Y
| 6,020
|
Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4",
"events_url": "https://api.github.com/users/kheyer/events{/privacy}",
"followers_url": "https://api.github.com/users/kheyer/followers",
"following_url": "https://api.github.com/users/kheyer/following{/other_user}",
"gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kheyer",
"id": 38166299,
"login": "kheyer",
"node_id": "MDQ6VXNlcjM4MTY2Mjk5",
"organizations_url": "https://api.github.com/users/kheyer/orgs",
"received_events_url": "https://api.github.com/users/kheyer/received_events",
"repos_url": "https://api.github.com/users/kheyer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kheyer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kheyer",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"This scenario currently requires explicitly passing the target features (to avoid the error): \r\n```python\r\nimport datasets\r\n\r\n...\r\n\r\nfeatures = dataset.features\r\nfeatures[\"output\"] = = [{\"test\": datasets.Value(\"int64\")}]\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32, features=features)\r\n```",
"I just encountered the same error in the same situation (multiprocessing with variable length outputs).\r\n\r\nThe funny (or dangerous?) thing is, that this error only showed up when testing with a small test dataset (16 examples, ValueError with `num_proc` >1) but the same code works fine for the full dataset (~70k examples).\r\n\r\n@mariosasko Any idea on how to do that with a nested feature with lists of variable lengths containing dicts?\r\n\r\nEDIT: Was able to narrow it down: >200 Examples: no error, <150 Examples: Error. \r\nNow idea what to make of this but pretty obvious that this is a bug....",
"This error also occurs while concatenating the datasets.",
"I'm running into the same error, is there any working workaround for this that doesnt involve using a larger subset or reducing the number of workers? I couldn't get the `features` set mentioned above to work..."
] |
2023-07-11T20:40:38Z
|
2024-10-27T06:30:13Z
| null |
NONE
| null | null | null | null |
### Describe the bug
I'm using a dataset with `map` and multiprocessing to run a function that returns a variable-length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing: in some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes/shards used.
I've reproduced a minimal example below. My current workaround is to fill empty results with a dummy value that I filter out afterwards, but this was a weird error that took a while to track down.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)])
def test_func(row, idx):
if idx==58:
return {'output': []}
else:
return {'output' : [{'test':1}, {'test':2}]}
# this works fine
test1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4)
# this fails
test2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32)
>ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value("null").
```
The error occurs during the check
```python
_check_if_features_can_be_aligned([dset.features for dset in dsets])
```
When the multiprocessing splitting lines up just right with the empty return value, one of the `dset` in `dsets` will have a single item with an empty list value, causing the error.
### Expected behavior
The expected behavior is that the result is the same regardless of the `num_proc` value used.
### Environment info
Datasets version 2.11.0
Python 3.9.16
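For reference, a minimal sketch of the explicit-features workaround suggested in the discussion, applied to the repro above:
```python
import datasets

dataset = datasets.Dataset.from_list([{"idx": i} for i in range(60)])

def test_func(row, idx):
    return {"output": [] if idx == 58 else [{"test": 1}, {"test": 2}]}

# Passing the target features explicitly keeps shards that only saw empty lists
# from being inferred as Sequence(null), which is what breaks the alignment check.
features = datasets.Features(
    {**dataset.features, "output": [{"test": datasets.Value("int64")}]}
)
test2 = dataset.map(test_func, with_indices=True, num_proc=32, features=features)
```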
| null |
{
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6020/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6020/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6017
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6017/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6017/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6017/events
|
https://github.com/huggingface/datasets/issues/6017
| 1,799,309,132
|
I_kwDODunzps5rP0dM
| 6,017
|
Switch to huggingface_hub's HfFileSystem
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[] |
2023-07-11T16:24:40Z
|
2023-07-17T17:01:01Z
|
2023-07-17T17:01:01Z
|
MEMBER
| null | null | null | null |
instead of the current `datasets.filesystems.hffilesystem.HfFileSystem`, which can be slow in some cases
Related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919
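For context, a minimal sketch of the `huggingface_hub` filesystem this would switch to (assuming `huggingface_hub` >= 0.14; the repo name is just an example):
```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# List the files of a dataset repo, which is the kind of operation
# `datasets` needs for data file resolution.
files = fs.ls("datasets/rotten_tomatoes", detail=False)
print(files)
```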
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6017/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6017/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6014
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6014/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6014/events
|
https://github.com/huggingface/datasets/issues/6014
| 1,798,213,816
|
I_kwDODunzps5rLpC4
| 6,014
|
Request to Share/Update Dataset Viewer Code
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/105081034?v=4",
"events_url": "https://api.github.com/users/lilyorlilypad/events{/privacy}",
"followers_url": "https://api.github.com/users/lilyorlilypad/followers",
"following_url": "https://api.github.com/users/lilyorlilypad/following{/other_user}",
"gists_url": "https://api.github.com/users/lilyorlilypad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lilyorlilypad",
"id": 105081034,
"login": "lilyorlilypad",
"node_id": "U_kgDOBkNoyg",
"organizations_url": "https://api.github.com/users/lilyorlilypad/orgs",
"received_events_url": "https://api.github.com/users/lilyorlilypad/received_events",
"repos_url": "https://api.github.com/users/lilyorlilypad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lilyorlilypad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lilyorlilypad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lilyorlilypad",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?",
"I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/datasets-viewer/blob/8efad8eae313a891f713469983bf4c744786f26e/run.py#L126-L131\r\n* https://github.com/huggingface/datasets-viewer/blob/8efad8eae313a891f713469983bf4c744786f26e/run.py#L145-L150\r\n\r\nTo make the viewer work, the first one should be replaced with the following:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nconfs = builder_cls.BUILDER_CONFIGS\r\n```\r\nAnd the second one:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nif conf:\r\n builder_instance = builder_cls(name=conf, cache_dir=path if path_to_datasets is not None else None)\r\nelse:\r\n builder_instance = builder_cls(cache_dir=path if path_to_datasets is not None else None)\r\n```\r\n\r\nBut as @lhoestq suggested, it's better to use the `datasets-server` API nowadays to [fetch the rows](https://huggingface.co/docs/datasets-server/rows).",
"> The dataset viewer on the Hugging Face website is incredibly useful\r\n\r\n@mariosasko i think @lilyorlilypad wants to run the new dataset-viewer, not the old one",
"> wants to run the new dataset-viewer, not the old one\r\n\r\nThanks for the clarification for me. I do want to run the new dataset-viewer. ",
"It should be possible to run it locally using the HF datasets-server API (docs [here](https://huggingface.co/docs/datasets-server)) but the front end part is not open source (yet ?)\r\n\r\nThe back-end is open source though if you're interested: https://github.com/huggingface/datasets-server\r\nIt automatically converts datasets on HF to Parquet, which is the format we use to power the viewer.",
"the new frontend would probably be hard to open source, as is, as it's quite intertwined with the Hub's code.\r\n\r\nHowever, at some point it would be amazing to have a community-driven open source implementation of a frontend to datasets-server! ",
"For the frontend viewer, see https://github.com/huggingface/datasets/issues/6139.\r\n\r\nAlso mentioned in https://github.com/huggingface/datasets-server/issues/213 and https://github.com/huggingface/datasets-server/issues/441\r\n\r\nClosing as a duplicate of https://github.com/huggingface/datasets/issues/6139",
"Hi team,\r\n\r\nI'm currently researching the Dataset Viewer project and would like to understand more about the frontend technologies used. Specifically, I'm interested in knowing:\r\n\r\nWhich frontend framework is being utilized (e.g., React, Vue, etc.)?\r\nAre there any specific libraries or components being used for UI (e.g., Material-UI, Ant Design)?\r\nAny other notable frontend tools or technologies that are part of this project?\r\nYour assistance in providing these details would be greatly appreciated. Thank you for your time and effort!\r\n\r\nBest regards",
"@jacob-rodgers-max we use https://svelte.dev/",
"> @jacob-rodgers-max we use https://svelte.dev/\r\n\r\nThank you very much for your prompt and detailed response!"
] |
2023-07-11T06:36:09Z
|
2024-07-20T07:29:08Z
|
2023-09-25T12:01:17Z
|
NONE
| null | null | null | null |
Overview:
The repository (huggingface/datasets-viewer) was recently archived, and when I tried to run the code, I got the error message `AttributeError: module 'datasets.load' has no attribute 'prepare_module'`. I could not resolve the issue myself due to the lack of documentation for that attribute.
Request:
I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code.
Thank you for considering this request, and I look forward to your response.
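For reference, preview rows can already be fetched programmatically from the datasets-server API mentioned in the discussion. A minimal sketch (the dataset, config, and split values are placeholders, and the endpoint shape follows its docs at the time of writing):
```python
import requests

# Example request to the datasets-server "rows" endpoint.
resp = requests.get(
    "https://datasets-server.huggingface.co/rows",
    params={"dataset": "rotten_tomatoes", "config": "default", "split": "train", "offset": 0, "length": 10},
)
resp.raise_for_status()
for row in resp.json()["rows"]:
    print(row["row"])
```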
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6014/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6013
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6013/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6013/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6013/events
|
https://github.com/huggingface/datasets/issues/6013
| 1,796,083,437
|
I_kwDODunzps5rDg7t
| 6,013
|
[FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] |
open
| false
| null |
[] | null |
[
"You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_columns=ds.column_names)\r\nds_combined = concatenate_datasets([ds, ds_new], axis=1)\r\n```\r\n\r\nDoing this automatically is hard to implement efficiently unless we know ahead of time which existing columns will be modified by a `map` transform. We have this info when `input_columns` are specified, so I think this is the only case we can optimize.",
"Hi @mariosasko ๐ Iโd like to start working on this issue."
] |
2023-07-10T06:42:20Z
|
2025-06-19T06:30:38Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
Currently, adding a new column with `map` causes all the data in the dataset to be duplicated and stored/cached on disk again. It should reuse the unchanged columns.
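For context, the case that seems most tractable to optimize is when `input_columns` makes the touched columns explicit; a small sketch of that call pattern (the `col` column is hypothetical):
```python
# Only `col` is read by the function; ideally the other columns could be reused as-is
# instead of being rewritten to disk.
ds_new = ds.map(lambda col: {"new_col": col + 2}, input_columns=["col"])
```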
### Motivation
This would allow having datasets with different columns that share some basic columns. Currently, such datasets become too expensive to store, and one would need some kind of on-the-fly join, which also doesn't seem to be implemented.
### Your contribution
_
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6013/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6013/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6012
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6012/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6012/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6012/events
|
https://github.com/huggingface/datasets/issues/6012
| 1,795,575,432
|
I_kwDODunzps5rBk6I
| 6,012
|
[FR] Transform Chaining, Lazy Mapping
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"You can use `with_transform` to get a new dataset object.\r\n\r\nSupport for lazy `map` has already been discussed [here](https://github.com/huggingface/datasets/issues/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex. ",
"> You can use `with_transform` to get a new dataset object.\r\n> \r\n> Support for lazy `map` has already been discussed [here](https://github.com/huggingface/datasets/issues/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex.\r\n\r\nI read about IterableDataset, and it seems to have lazy mapping. But I can't figure out how to convert an IterableDataset into a normal one when needed.\r\n\r\n`with_transform` still does not chain AFAIU.",
"> I read about IterableDataset, and it seems to have lazy mapping. But I can't figure out how to convert an IterableDataset into a normal one when needed.\r\n\r\nYou must cache an `IterableDataset` to disk to load it as a `Dataset`. One way to do this is with `Dataset.from_generator`:\r\n```python\r\nfrom functools import partial\r\nfrom datasets import Dataset\r\n\r\ndef gen_from_iterable_dataset(iterable_ds)\r\n yield from iterable_ds\r\n\r\nds = Dataset.from_generator(partial(gen_from_iterable_dataset, iterable_ds), features=iterable_ds.features})\r\n```\r\n\r\n> with_transform still does not chain AFAIU.\r\n\r\nYes, not supported yet - the solution is to combine the transforms into a single one.",
"I wonder if it would be beneficial to have a dedicated method to do that ? Maybe a `.save_to_disk()` so that the user can reload the resulting dataset later ?",
"> ```python\r\n> from functools import partial\r\n> from datasets import Dataset\r\n> \r\n> def gen_from_iterable_dataset(iterable_ds)\r\n> yield from iterable_ds\r\n> \r\n> ds = Dataset.from_generator(partial(gen_from_iterable_dataset, iterable_ds), features=iterable_ds.features})\r\n> ```\r\n\r\n@mariosasko With these complex mapping functions, what hash will be used to cache this dataset?\r\n",
"The params passed to `Dataset.from_generator` will be used to compute the hash (`partial` encapsulates the `iterable_ds` value, so changing it will also change the hash)",
"Hi, I think this feature would be very useful. I want to concatenate large datasets with heterogeneous columns. I dislike `map` since I don't want multiple copy of that datasets locally. I tried to use \"set_transform\" on each dataset to convert it to a standard features format, but `datasets.concatenate_datasets` ignores the updated format of the datasets.ย A work around is to use `torch.utils.data.ConcatDataset`. Is there a neat way to do it using HF datasets?๏ปฟ",
"@mariosasko These features would be handy for large datasets. A typical use case is video datasets: We have millions of videos, each stored in some OSS so they require some custom loading logic.\n\n1) Due to the memory limit, loading the videos a priori into the memory is infeasible. But we can postpone video loading until they are needed with lazy mapping.\n2) With chained transforms, we can allow the users to specify their custom video preprocessing logic while keeping the loading logic the same.",
"FYI lazy map is available for `IterableDataset`(map is applied on-the-fly when iterating on the dataset):\n\n```python\nds = load_dataset(...streaming=True)\n# or\nds = Dataset.from_list(...).to_iterable_dataset()\n# or\nds = IterableDataset.from_generator(...)\n\n# Then you can chain many map/filter/shuffle/etc.\nds = ds.map(...).filter(...).map(...)\n\n# The map functions are applied on-the-fly when iterating on the dataset\nfor example in ds:\n ..."
] |
2023-07-09T21:40:21Z
|
2025-01-20T14:06:28Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
Currently, a `map` call processes and duplicates the whole dataset, which takes both time and disk space.
The solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested.
The API should look like `map`, as `set_transform` changes the current dataset while `map` returns another dataset.
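As a stopgap, some chaining can be emulated today by composing the transform functions into a single on-the-fly transform for `with_transform`; a rough sketch with hypothetical batch-level functions, not the lazy `map` requested here:
```python
def compose(*transforms):
    # Apply several batch-level transforms one after another on access.
    def combined(batch):
        for transform in transforms:
            batch = transform(batch)
        return batch
    return combined

# tokenize_batch and add_lengths_batch are placeholders for user-defined transforms.
ds = ds.with_transform(compose(tokenize_batch, add_lengths_batch))
```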
### Motivation
Lazy processing allows lower disk usage and faster experimentation.
### Your contribution
_
| null |
{
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6012/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6012/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6011
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6011/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6011/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6011/events
|
https://github.com/huggingface/datasets/issues/6011
| 1,795,296,568
|
I_kwDODunzps5rAg04
| 6,011
|
Documentation: wiki_dpr Dataset has no metric_type for Faiss Index
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29335344?v=4",
"events_url": "https://api.github.com/users/YichiRockyZhang/events{/privacy}",
"followers_url": "https://api.github.com/users/YichiRockyZhang/followers",
"following_url": "https://api.github.com/users/YichiRockyZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YichiRockyZhang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YichiRockyZhang",
"id": 29335344,
"login": "YichiRockyZhang",
"node_id": "MDQ6VXNlcjI5MzM1MzQ0",
"organizations_url": "https://api.github.com/users/YichiRockyZhang/orgs",
"received_events_url": "https://api.github.com/users/YichiRockyZhang/received_events",
"repos_url": "https://api.github.com/users/YichiRockyZhang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YichiRockyZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YichiRockyZhang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YichiRockyZhang",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! You can do `ds.get_index(\"embeddings\").faiss_index.metric_type` to get the metric type and then match the result with the FAISS metric [enum](https://github.com/facebookresearch/faiss/blob/43d86e30736ede853c384b24667fc3ab897d6ba9/faiss/MetricType.h#L22-L36) (should be L2).",
"Ah! Thank you for pointing this out. FYI: the enum indicates it's using the inner product. Using `torch.inner` or `torch.dot` still produces a discrepancy compared to the built-in score. I think this is because of the compression/quantization that occurs with the FAISS index."
] |
2023-07-09T08:30:19Z
|
2023-07-11T03:02:36Z
|
2023-07-11T03:02:36Z
|
NONE
| null | null | null | null |
### Describe the bug
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.
### Steps to reproduce the bug
System: Python 3.9.16, Transformers 4.30.2, WSL
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because I do not know how the `scores` are being computed for `get_nearest_examples()`.
```py
from transformers import DPRQuestionEncoder, DPRContextEncoder, DPRQuestionEncoderTokenizer, DPRContextEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
def encode_question(query, tokenizer=tokenizer, encoder=encoder):
    inputs = tokenizer(query, return_tensors='pt')
    question_embedding = encoder(**inputs)[0].detach().numpy()
    return question_embedding

def get_knn(query, k=5, tokenizer=tokenizer, encoder=encoder, verbose=False):
    enc_question = encode_question(query, tokenizer, encoder)
    topk_results = ds.get_nearest_examples(index_name='embeddings',
                                           query=enc_question,
                                           k=k)
    a = torch.tensor(enc_question[0]).reshape(768)
    b = torch.tensor(topk_results.examples['embeddings'][0])
    print(a.shape, b.shape)
    print(torch.dot(a, b))
    print((a-b).pow(2).sum())
    return topk_results
```
The [FAISS documentation](https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances) suggests the metric is usually L2 distance (without the square root) or the inner product. I compute both for the sample query:
```py
query = """ it catapulted into popular culture along with a line of action figures and other toys by Bandai.[2] By 2001, the media franchise had generated over $6 billion in toy sales.
Despite initial criticism that its action violence targeted child audiences, the franchise has been commercially successful."""
get_knn(query,k=5)
```
Here, I get a dot product of 80.6020 and an L2 distance of 77.6616, and
```py
NearestExamplesResults(scores=array([76.20431 , 75.312416, 74.945404, 74.866394, 74.68506 ],
dtype=float32), examples={'id': ['3081096', '2004811', '8908258', '9594124', '286575'], 'text': ['actors, resulting in the "Power Rangers" franchise which has continued since then into sequel TV series (with "Power Rangers Beast Morphers" set to premiere in 2019), comic books, video games, and three feature films, with a further cinematic universe planned. Following from the success of "Power Rangers", Saban acquired the rights to more of Toei\'s library, creating "VR Troopers" and "Big Bad Beetleborgs" from several Metal Hero Series shows and "Masked Rider" from Kamen Rider Series footage. DIC Entertainment joined this boom by acquiring the rights to "Gridman the Hyper Agent" and turning it into "Superhuman Samurai Syber-Squad". In 2002,',
```
Using `k=1` indicates that the higher the returned score, the better the match, so the metric should not be L2 distance. However, my manually computed inner product (80.6) has a discrepancy with the reported score (76.2). Perhaps this has to do with me using the `compressed` embeddings?
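For reference, the metric can be read from the wrapped FAISS index itself rather than from the `datasets` wrapper; a minimal sketch, assuming `ds` and the `embeddings` index from above are already loaded:
```py
import faiss

faiss_index = ds.get_index("embeddings").faiss_index
print(faiss_index.metric_type)  # integer enum value from FAISS
print(faiss_index.metric_type == faiss.METRIC_INNER_PRODUCT)
print(faiss_index.metric_type == faiss.METRIC_L2)
```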
### Expected behavior
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # METRIC_INNER_PRODUCT
```
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.18.0-477.13.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/29335344?v=4",
"events_url": "https://api.github.com/users/YichiRockyZhang/events{/privacy}",
"followers_url": "https://api.github.com/users/YichiRockyZhang/followers",
"following_url": "https://api.github.com/users/YichiRockyZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YichiRockyZhang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/YichiRockyZhang",
"id": 29335344,
"login": "YichiRockyZhang",
"node_id": "MDQ6VXNlcjI5MzM1MzQ0",
"organizations_url": "https://api.github.com/users/YichiRockyZhang/orgs",
"received_events_url": "https://api.github.com/users/YichiRockyZhang/received_events",
"repos_url": "https://api.github.com/users/YichiRockyZhang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/YichiRockyZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YichiRockyZhang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/YichiRockyZhang",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6011/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6011/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6010
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6010/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6010/events
|
https://github.com/huggingface/datasets/issues/6010
| 1,793,838,152
|
I_kwDODunzps5q68xI
| 6,010
|
Improve `Dataset`'s string representation
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"I want to take a shot at this if possible ",
"Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`/`_repr_html_` implementations for some pointers/ideas.",
"@mariosasko are there any other similar issues that I could work on? I see this has been already solved. "
] |
2023-07-07T16:38:03Z
|
2023-09-01T03:45:07Z
| null |
COLLABORATOR
| null | null | null | null |
Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit.
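A minimal sketch of what the richer output could look like, reusing pandas' HTML rendering for a small head of the dataset (just an illustration, not a proposed implementation):
```python
def dataset_repr_html(ds, num_rows: int = 5) -> str:
    # Show the schema plus the first few rows, rendered by pandas.
    head = ds.select(range(min(num_rows, len(ds)))).to_pandas()
    schema = "<br>".join(f"<b>{name}</b>: {feature}" for name, feature in ds.features.items())
    return f"<p>{schema}</p>" + head._repr_html_()
```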
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6010/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6008
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6008/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6008/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6008/events
|
https://github.com/huggingface/datasets/issues/6008
| 1,789,869,344
|
I_kwDODunzps5qrz0g
| 6,008
|
Dataset.from_generator consistently freezes at ~1000 rows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27695722?v=4",
"events_url": "https://api.github.com/users/andreemic/events{/privacy}",
"followers_url": "https://api.github.com/users/andreemic/followers",
"following_url": "https://api.github.com/users/andreemic/following{/other_user}",
"gists_url": "https://api.github.com/users/andreemic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andreemic",
"id": 27695722,
"login": "andreemic",
"node_id": "MDQ6VXNlcjI3Njk1NzIy",
"organizations_url": "https://api.github.com/users/andreemic/orgs",
"received_events_url": "https://api.github.com/users/andreemic/received_events",
"repos_url": "https://api.github.com/users/andreemic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andreemic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreemic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andreemic",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n\r\nOur support for multi-dim arrays could be better, and we plan to improve it as part of https://github.com/huggingface/datasets/issues/5272.",
"> By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Array3D(shape=(512,512,3), dtype=\"float32\")})` should be faster).\r\n> \r\n> Our support for multi-dim arrays could be better, and we plan to improve it as part of #5272.\r\n\r\nThanks for the explanation! The Image array was just for demonstration, I use PIL Images in practice. Does that make a difference? What's the best approach for a dataset with PIL Images as rows?",
"It's best to use the `datasets.Image()` feature type for PIL images (to save space) :)"
] |
2023-07-05T16:06:48Z
|
2023-07-10T13:46:39Z
|
2023-07-10T13:46:39Z
|
NONE
| null | null | null | null |
### Describe the bug
Whenever I try to create a dataset that contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there is more memory available.
Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.
I've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.
Let me know if you have ideas on how to resolve it!
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
def gen():
    for row in range(10000):
        yield {"i": np.random.rand(512, 512, 3)}
Dataset.from_generator(gen)
# -> 90% of the time gets stuck around 1000 rows
```
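For what it's worth, the freeze coincides with the default interval at which examples are flushed to disk (every 1000 rows); below is a sketch of the two knobs mentioned in the replies, `writer_batch_size` and a fixed-shape `Array3D` feature, applied to the toy generator above (cast to float32 to match the declared feature, and assuming `writer_batch_size` is forwarded to the underlying builder as described there):
```python
from datasets import Dataset, Features, Array3D
import numpy as np

def gen():
    for row in range(10000):
        yield {"i": np.random.rand(512, 512, 3).astype("float32")}

# The explicit fixed-shape feature avoids per-example type inference;
# writer_batch_size controls how often rows are written to disk.
ds = Dataset.from_generator(
    gen,
    features=Features({"i": Array3D(shape=(512, 512, 3), dtype="float32")}),
    writer_batch_size=100,
)
```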
### Expected behavior
It should continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6008/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6008/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6007
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6007/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6007/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6007/events
|
https://github.com/huggingface/datasets/issues/6007
| 1,789,782,693
|
I_kwDODunzps5qreql
| 6,007
|
Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/silverriver",
"id": 2529049,
"login": "silverriver",
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"repos_url": "https://api.github.com/users/silverriver/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"type": "User",
"url": "https://api.github.com/users/silverriver",
"user_view_type": "public"
}
|
[
{
"color": "c2e0c6",
"default": false,
"description": "Related to Apache Arrow",
"id": 5705560427,
"name": "arrow",
"node_id": "LA_kwDODunzps8AAAABVBPxaw",
"url": "https://api.github.com/repos/huggingface/datasets/labels/arrow"
}
] |
open
| false
| null |
[] | null |
[
"This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier.",
"I am afraid int32 is not the reason for this error.\r\n\r\nI have submitted a commit to use int64 for all ints in the dataset:\r\nhttps://huggingface.co/datasets/liwu/MNBVC/commit/857ac00d9eab96a6708ad6a82bd9001686042a9e\r\n\r\nand I have updated my env to the latest datasets release:\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.13.1\r\n- Platform: macOS-13.2.1-arm64-arm-64bit\r\n- Python version: 3.11.2\r\n- Huggingface_hub version: 0.13.4\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 1.5.3\r\n\r\nBut the error still exist\r\n\r\n```\r\nDownloading and preparing dataset mnbvc/news_peoples_daily to /Users/silver/.cache/huggingface/datasets/liwu___mnbvc/news_peoples_daily/0.0.1/ee380f6309fe9b8b0d1fb14d77118f132444f22c8c4b28bf5c1645312688e051...\r\nDownloading data files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 12/12 [00:00<00:00, 9070.40it/s]\r\nExtracting data files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 12/12 [00:00<00:00, 2697.16it/s]\r\n---------------------------------------------------------------------------\r\nOverflowError Traceback (most recent call last)\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1647, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1646 example = self.info.features.encode_example(record) if self.info.features is not None else record\r\n-> 1647 writer.write(example, key)\r\n 1648 num_examples_progress_update += 1\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:490, in ArrowWriter.write(self, example, key, writer_batch_size)\r\n 488 self.hkey_record = []\r\n--> 490 self.write_examples_on_file()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile 
~/git/venv/lib/python3.11/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOverflowError Traceback (most recent call last)\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1656, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1655 num_shards = shard_id + 1\r\n-> 1656 num_examples, num_bytes = writer.finalize()\r\n 1657 writer.close()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:584, in ArrowWriter.finalize(self, close_stream)\r\n 583 self.hkey_record = []\r\n--> 584 self.write_examples_on_file()\r\n 585 # If schema is known, infer features even if no examples were written\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:448, in ArrowWriter.write_examples_on_file(self)\r\n 444 batch_examples[col] = [\r\n 445 row[0][col].to_pylist()[0] if isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) else row[0][col]\r\n 446 for row in self.current_examples\r\n 447 ]\r\n--> 448 self.write_batch(batch_examples=batch_examples)\r\n 449 self.current_examples = []\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:553, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)\r\n 552 typed_sequence = OptimizedTypedSequence(col_values, type=col_type, try_type=col_try_type, col=col)\r\n--> 553 arrays.append(pa.array(typed_sequence))\r\n 554 inferred_features[col] = typed_sequence.get_inferred_type()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:236, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/arrow_writer.py:189, in TypedSequence.__arrow_array__(self, type)\r\n 188 trying_cast_to_python_objects = True\r\n--> 189 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n 190 # use smaller integer precisions if possible\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:320, in pyarrow.lib.array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/array.pxi:39, in pyarrow.lib._sequence_to_array()\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nOverflowError: Python int too large to convert to C long\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\nCell In[2], line 1\r\n----> 1 dataset = load_dataset(\"liwu/MNBVC\", 'news_peoples_daily', split='train')\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/load.py:1809, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1806 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1808 # Download and prepare data\r\n-> 1809 builder_instance.download_and_prepare(\r\n 1810 download_config=download_config,\r\n 1811 download_mode=download_mode,\r\n 1812 verification_mode=verification_mode,\r\n 1813 try_from_hf_gcs=try_from_hf_gcs,\r\n 1814 
num_proc=num_proc,\r\n 1815 storage_options=storage_options,\r\n 1816 )\r\n 1818 # Build dataset for splits\r\n 1819 keep_in_memory = (\r\n 1820 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1821 )\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:909, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)\r\n 907 if num_proc is not None:\r\n 908 prepare_split_kwargs[\"num_proc\"] = num_proc\r\n--> 909 self._download_and_prepare(\r\n 910 dl_manager=dl_manager,\r\n 911 verification_mode=verification_mode,\r\n 912 **prepare_split_kwargs,\r\n 913 **download_and_prepare_kwargs,\r\n 914 )\r\n 915 # Sync info\r\n 916 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1670, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)\r\n 1669 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):\r\n-> 1670 super()._download_and_prepare(\r\n 1671 dl_manager,\r\n 1672 verification_mode,\r\n 1673 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS\r\n 1674 or verification_mode == VerificationMode.ALL_CHECKS,\r\n 1675 **prepare_splits_kwargs,\r\n 1676 )\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1004, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)\r\n 1000 split_dict.add(split_generator.split_info)\r\n 1002 try:\r\n 1003 # Prepare split will record examples associated to the split\r\n-> 1004 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 1005 except OSError as e:\r\n 1006 raise OSError(\r\n 1007 \"Cannot find data file. \"\r\n 1008 + (self.manual_download_instructions or \"\")\r\n 1009 + \"\\nOriginal error:\\n\"\r\n 1010 + str(e)\r\n 1011 ) from None\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1508, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)\r\n 1506 job_id = 0\r\n 1507 with pbar:\r\n-> 1508 for job_id, done, content in self._prepare_split_single(\r\n 1509 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args\r\n 1510 ):\r\n 1511 if done:\r\n 1512 result = content\r\n\r\nFile ~/git/venv/lib/python3.11/site-packages/datasets/builder.py:1665, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1663 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:\r\n 1664 e = e.__context__\r\n-> 1665 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n 1667 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\nBesides, it works fine when I am using streamed dataset.",
"`simhash` is the problematic column - it has values such as `18329103420363166823` that are out of the int64 range. You can fix this by setting the feature type to `Value(\"string\")` (it's advised to use this type for hash values in general)\r\n\r\n> Besides, it works fine when I am using streamed dataset.\r\n\r\nStreaming yields Python dictionaries from the script without converting them to the Arrow representation, as this conversion step is not that cheap performance-wise.",
"i am using uint64 for simhash\r\n\r\nuint64 ranges up to about 3.69E19.\r\n\r\n18329103420363166823 is less than this value.\r\n\r\nmoreover, our simhash algorithm use 64 bits. it should fit in uint64.\r\n\r\n\r\n\r\n",
"You are right. I overlooked the feature type.\r\n\r\nThis is a reproducer:\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets.arrow_writer import TypedSequence\r\n\r\npa.array(TypedSequence([18329103420363166823], type=Value(\"uint64\")))\r\n```\r\n\r\n`pa.array([18329103420363166823])` also fails with the same error, so it seems PyArrow does not always infer the correct type as NumPy does (`uint64` in this case).\r\n\r\nI'll report this issue in the Arrow repo.\r\n\r\n`pa.array([18329103420363166823], pa.uint64)` works, so maybe we can implement a temporary fix (supporting complex input such as `[{\"image\": pil_image, \"num\": uint64_value}]` would be hard though).\r\n\r\nIn the meantime, you should be able to bypass this error by returning the `simhash` values as NumPy scalars in the script:\r\n```python\r\ndef _generate_examples(self, ...):\r\n ...\r\n yield {..., \"simhash\": np.uint64(simhash), ...}\r\n```",
"Thank you for checking this issue in detail.\r\n\r\nHowever, it seems that using `np.uint64(simhash)` does not work. The same issue still exists.\r\n\r\nhttps://huggingface.co/datasets/liwu/MNBVC/commit/1e44f1e400b7e61052647d44c99cdae3bae9c830\r\n\r\nAnyway, we decide to use string type for these simhash values. Hope pyarrow can fix their bug soon.",
"Arrow issue: https://github.com/apache/arrow/issues/36520",
"May be something read your training data line by line.\r\nThen your training data just only one line. \r\nIt is so large.\r\nI guess.\r\n"
] |
2023-07-05T15:16:50Z
|
2024-02-07T22:22:35Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When loading a large dataset with the following code
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
We encountered the error: "OverflowError: Python int too large to convert to C long"
The error looks something like:
```
OverflowError: Python int too large to convert to C long
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
<ipython-input-7-0ed8700e662d> in <module>
----> 1 dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', cache_dir='/sfs/MNBVC/.cache/')
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
-> 1751 use_auth_token=use_auth_token,
1752 )
1753
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
791 try:
792 # Prepare split will record examples associated to the split
--> 793 self._prepare_split(split_generator, **prepare_split_kwargs)
794 except OSError as e:
795 raise OSError(
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
1219 writer.write(example, key)
1220 finally:
-> 1221 num_examples, num_bytes = writer.finalize()
1222
1223 split_generator.split_info.num_examples = num_examples
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
536 # Re-intializing to empty list for next batch
537 self.hkey_record = []
--> 538 self.write_examples_on_file()
539 if self.pa_writer is None:
540 if self.schema:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
407 # Since current_examples contains (example, key) tuples
408 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 409 self.write_batch(batch_examples=batch_examples)
410 self.current_examples = []
411
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
506 col_try_type = try_features[col] if try_features is not None and col in try_features else None
507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 508 arrays.append(pa.array(typed_sequence))
509 inferred_features[col] = typed_sequence.get_inferred_type()
510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
180 else:
181 trying_cast_to_python_objects = True
--> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
183 # use smaller integer precisions if possible
184 if self.trying_int_optimization:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
OverflowError: Python int too large to convert to C long
```
However, that dataset can be loaded in a streaming manner:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True)
for i in dataset:
    pass  # it works well
```
Another issue is reported in our dataset hub:
https://huggingface.co/datasets/liwu/MNBVC/discussions/2
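As the discussion above suggests, the overflow appears to come from PyArrow's type inference on values outside the int64 range rather than from the loading code itself; a minimal reproducer sketch using one of the out-of-range `simhash` values:
```python
import pyarrow as pa

value = 18329103420363166823  # a 64-bit simhash that does not fit in a signed int64

try:
    pa.array([value])  # type inference picks a signed type and overflows
except OverflowError as e:
    print("inference fails:", e)

print(pa.array([value], type=pa.uint64()))  # an explicit unsigned type works
```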
### Steps to reproduce the bug
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
### Expected behavior
the dataset can be safely loaded
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9
- Python version: 3.6.8
- PyArrow version: 6.0.1
- Pandas version: 1.1.5
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6007/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6007/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6006
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6006/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6006/events
|
https://github.com/huggingface/datasets/issues/6006
| 1,788,855,582
|
I_kwDODunzps5qn8Ue
| 6,006
|
NotADirectoryError when loading gigawords
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.github.com/users/xipq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xipq",
"id": 115634163,
"login": "xipq",
"node_id": "U_kgDOBuRv8w",
"organizations_url": "https://api.github.com/users/xipq/orgs",
"received_events_url": "https://api.github.com/users/xipq/received_events",
"repos_url": "https://api.github.com/users/xipq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xipq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xipq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."
] |
2023-07-05T06:23:41Z
|
2023-07-05T06:31:02Z
|
2023-07-05T06:31:01Z
|
NONE
| null | null | null | null |
### Describe the bug
Got a `NotADirectoryError` when loading the gigaword dataset.
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last): [0/1862]
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
for key, record in generator:
File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b
64efb424b6/gigaword.py", line 115, in _generate_examples
with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
return function(*args, use_auth_token=use_auth_token, **kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xope
n
return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e
89780c4be7599794a4f559048ec/org_data/train.src.txt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gigaword.py", line 38, in <module>
main()
File "gigaword.py", line 35, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
dataset = self.load_dataset()
File "gigaword.py", line 29, in load_dataset
return datasets.load_dataset('gigaword')
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Download and process the dataset successfully
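As noted in the resolution comment, the root cause was a corrupted file in the download cache; a minimal sketch of forcing a clean re-download instead of manually clearing the cache (using the standard `download_mode` argument):
```python
import datasets

# Ignore the cached (corrupted) archives and download them again
dataset = datasets.load_dataset("gigaword", download_mode="force_redownload")
```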
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.github.com/users/xipq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xipq",
"id": 115634163,
"login": "xipq",
"node_id": "U_kgDOBuRv8w",
"organizations_url": "https://api.github.com/users/xipq/orgs",
"received_events_url": "https://api.github.com/users/xipq/received_events",
"repos_url": "https://api.github.com/users/xipq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xipq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xipq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6006/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6003
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6003/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6003/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6003/events
|
https://github.com/huggingface/datasets/issues/6003
| 1,786,554,110
|
I_kwDODunzps5qfKb-
| 6,003
|
interleave_datasets & DataCollatorForLanguageModeling having a conflict ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1929830?v=4",
"events_url": "https://api.github.com/users/PonteIneptique/events{/privacy}",
"followers_url": "https://api.github.com/users/PonteIneptique/followers",
"following_url": "https://api.github.com/users/PonteIneptique/following{/other_user}",
"gists_url": "https://api.github.com/users/PonteIneptique/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PonteIneptique",
"id": 1929830,
"login": "PonteIneptique",
"node_id": "MDQ6VXNlcjE5Mjk4MzA=",
"organizations_url": "https://api.github.com/users/PonteIneptique/orgs",
"received_events_url": "https://api.github.com/users/PonteIneptique/received_events",
"repos_url": "https://api.github.com/users/PonteIneptique/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PonteIneptique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PonteIneptique/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PonteIneptique",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2023-07-03T17:15:31Z
|
2023-07-03T17:15:31Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Hi everyone :)
I have two local, custom datasets (one "sentence" per line) which I split 95/5 for pre-training a BERT model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_datasets`:
- `tokenize()` runs fine
- `group_text()` runs fine
Every time, on step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1, it runs
- training without interleave on dataset 2, it runs
- training without `.to_iterable_dataset()`, it hangs then crashes
- training without `group_texts()` and padding to max_length seemed to fix the issue, but it is hard to say whether the problem would simply have appeared much later in terms of steps.
I might have coded something wrong, but I can't see what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
    dataset = load_dataset(
        "text",
        data_files={"train": [path]},
        split=f"train[{percent}]"
    )
    dataset = dataset.map(
        lambda examples: tokenize(examples["text"]),
        batched=True,
        num_proc=num_proc,
    )
    dataset = dataset.map(
        group_texts,
        batched=True,
        num_proc=num_proc,
        desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
        remove_columns=["text"]
    )
    print(len(dataset))
    return dataset.to_iterable_dataset()
```
I hardcoded group_text:
```py
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
    # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
    total_length = (total_length // 512) * 512
    # Split by chunks of max_len.
    result = {
        k: [t[i: i + 512] for i in range(0, total_length, 512)]
        for k, t in concatenated_examples.items()
    }
    # result = {k: [el for el in elements if el] for k, elements in result.items()}
    return result
```
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
[train1, train2],
probabilities=[0.8, 0.2],
seed=42
)
eval_dataset = interleave_datasets(
[dev1, dev2],
probabilities=[0.8, 0.2],
seed=42
)
```
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows an empty tensor of shape `(, 32)`, if I remember correctly.
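A debugging sketch (my own assumption, not a confirmed fix): the commented-out line in `group_texts` suggests empty examples may survive grouping, so dropping them before `to_iterable_dataset()` / `interleave_datasets()` could tell us whether the empty tensor comes from there. The column name `input_ids` is assumed from the tokenizer output.
```python
# Hypothetical check: drop examples that end up empty after group_texts,
# before converting to an iterable dataset and interleaving.
dataset = dataset.filter(
    lambda example: len(example["input_ids"]) > 0,
    num_proc=num_proc,
)
```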
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6003/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6003/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5999
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5999/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5999/events
|
https://github.com/huggingface/datasets/issues/5999
| 1,781,851,513
|
I_kwDODunzps5qNOV5
| 5,999
|
Getting a 409 error while loading xglue dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4",
"events_url": "https://api.github.com/users/Praful932/events{/privacy}",
"followers_url": "https://api.github.com/users/Praful932/followers",
"following_url": "https://api.github.com/users/Praful932/following{/other_user}",
"gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Praful932",
"id": 45713796,
"login": "Praful932",
"node_id": "MDQ6VXNlcjQ1NzEzNzk2",
"organizations_url": "https://api.github.com/users/Praful932/orgs",
"received_events_url": "https://api.github.com/users/Praful932/received_events",
"repos_url": "https://api.github.com/users/Praful932/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Praful932/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Praful932",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https://huggingface.co/datasets/xglue/discussions/5"
] |
2023-06-30T04:13:54Z
|
2023-06-30T05:57:23Z
|
2023-06-30T05:57:22Z
|
NONE
| null | null | null | null |
### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
### Expected behavior
Expected the dataset to load
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5999/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5998
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5998/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5998/events
|
https://github.com/huggingface/datasets/issues/5998
| 1,781,805,018
|
I_kwDODunzps5qNC_a
| 5,998
|
The current implementation has a potential bug in the sort method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4",
"events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}",
"followers_url": "https://api.github.com/users/wangyuxinwhy/followers",
"following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}",
"gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangyuxinwhy",
"id": 22192665,
"login": "wangyuxinwhy",
"node_id": "MDQ6VXNlcjIyMTkyNjY1",
"organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs",
"received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events",
"repos_url": "https://api.github.com/users/wangyuxinwhy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangyuxinwhy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting, @wangyuxinwhy. "
] |
2023-06-30T03:16:57Z
|
2023-06-30T14:21:03Z
|
2023-06-30T14:11:25Z
|
NONE
| null | null | null | null |
### Describe the bug
In the `sort` method, here's a piece of code:
```python
# column_names: Union[str, Sequence_[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
    column_names = [column_names]
```
Based on the `column_names` type annotation, passing a tuple should be accepted, but it raises an error, as in the example below.
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
Of course, after I changed the tuple into a list, everything worked fine.
Changing the code to the following would avoid the problem:
```python
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
    if isinstance(column_names, str):
        column_names = [column_names]
    else:
        column_names = list(column_names)
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('glue', 'ax')['test']
dataset.sort(column_names=('premise', 'hypothesis'))
# Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset.
```
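Until this is changed in `datasets`, a minimal workaround is to convert the tuple to a list before calling `sort`:
```python
# Workaround: pass a list instead of a tuple
dataset = dataset.sort(column_names=list(("premise", "hypothesis")))
```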
### Expected behavior
Passing a tuple to `column_names` should be equivalent to passing a list.
### Environment info
- `datasets` version: 2.13.0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5998/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5997
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5997/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5997/events
|
https://github.com/huggingface/datasets/issues/5997
| 1,781,582,818
|
I_kwDODunzps5qMMvi
| 5,997
|
extend the map function so it can wrap around long text that does not fit in the context window
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4",
"events_url": "https://api.github.com/users/siddhsql/events{/privacy}",
"followers_url": "https://api.github.com/users/siddhsql/followers",
"following_url": "https://api.github.com/users/siddhsql/following{/other_user}",
"gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/siddhsql",
"id": 127623723,
"login": "siddhsql",
"node_id": "U_kgDOB5tiKw",
"organizations_url": "https://api.github.com/users/siddhsql/orgs",
"received_events_url": "https://api.github.com/users/siddhsql/received_events",
"repos_url": "https://api.github.com/users/siddhsql/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions",
"type": "User",
"url": "https://api.github.com/users/siddhsql",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"I just noticed the [docs](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then.",
"All the values in a batch must be of the same length. So one solution is dropping all the input columns:\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nAnother is padding/transforming the input columns to the tokenizer output's length (447). "
] |
2023-06-29T22:15:21Z
|
2023-07-03T17:58:52Z
| null |
NONE
| null | null | null | null |
### Feature request
I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap the text around into multiple rows, with each row fitting the model's context window. I tried to do it using this code as an example, which I borrowed from [here](https://stackoverflow.com/a/76343993/147530):
```
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
```
but running the code gives me this error:
```
File "/llm/fine-tune.py", line 117, in <module>
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single
writer.write_batch(batch)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447
```
The lambda function I have provided correctly chops up the long text so it wraps around (which is why 394 samples become 447 after wrapping), but the dataset `map` function rejects the result because the returned columns no longer have the same length as the untouched input columns.
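A sketch of the workaround suggested in one of the comments: drop the original input columns so every column in the returned batch has the new length (447 here). The setup mirrors the snippet above and is assumed, not taken from the original script.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model
data = load_dataset("text", data_files="lines.txt", split="train")  # placeholder file

# Removing the input columns lets map() return a different number of rows
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=data.column_names,
)
```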
### Motivation
please see above
### Your contribution
I'm afraid I don't have much knowledge to help
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5997/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5993
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5993/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5993/events
|
https://github.com/huggingface/datasets/issues/5993
| 1,776,643,555
|
I_kwDODunzps5p5W3j
| 5,993
|
ValueError: Table schema does not match schema used to create file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! ๐ "
] |
2023-06-27T10:54:07Z
|
2023-06-27T15:36:42Z
|
2023-06-27T15:32:44Z
|
NONE
| null | null | null | null |
### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()` initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, and `pyarrow` then fails on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written.
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
### Expected behavior
The dataset gets successfully saved as parquet.
In the same way as it does when saving it as csv:
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
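Until the fix is in a release, one possible workaround (a sketch, assuming the wrapped `pyarrow.Table` is reachable as `ds.data.table`) is to write the underlying Arrow table directly and bypass the writer's schema check:
```python
import pyarrow.parquet as pq

# Write the dataset's backing Arrow table directly, sidestepping the
# features-vs-table schema mismatch.
pq.write_table(ds.data.table, "demo.parquet")
```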
### Environment info
`python==3.11`
`datasets==2.13.1`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5993/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5991/events
|
https://github.com/huggingface/datasets/issues/5991
| 1,774,456,518
|
I_kwDODunzps5pxA7G
| 5,991
|
`map` with any joblib backend
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"#self-assign\n\nHi @lhoestq ๐๐ผ\n\nIโd like to work on this!\n\nPlanning to support progress tracking with `map()` using any joblib backend (like \"loky\") by replacing the Queue-based approach in `iflatmap_unordered` with a file-based progress tracking mechanism (e.g. shared temp file with periodic updates).\n\nThis would allow the progress bar to work even in backends where inter-process Queues aren't supported. Let me know if this sounds good โ Iโll get started!\n",
"I think ideally it should have a general solution, since some joblib backends don't have a shared filesystem"
] |
2023-06-26T10:33:42Z
|
2025-09-04T10:43:06Z
| null |
MEMBER
| null | null | null | null |
We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.
Right now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process.
If we had a Queue implementation that works on any joblib backend by leveraging a filesystem shared among the workers, we could have `iflatmap_unordered` for joblib and therefore a `map` with any joblib backend, with a progress bar!
Note that the Queue doesn't need to be that optimized though since we can choose a small frequency for progress updates (like 1 update per second).
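A minimal sketch of the idea, assuming a filesystem shared among workers (which, as noted in the comments, not every joblib backend guarantees): each worker writes its progress to its own file and the main process sums them about once per second. All names are illustrative, not the actual `datasets` internals.
```python
import glob
import os
import tempfile
import threading
import time

from joblib import Parallel, delayed
from tqdm import tqdm

def worker(worker_id: int, n_items: int, progress_dir: str) -> int:
    """Hypothetical worker: writes its own progress count to a file it owns."""
    path = os.path.join(progress_dir, f"progress_{worker_id}.txt")
    for i in range(n_items):
        time.sleep(0.01)  # stand-in for the real per-example work
        with open(path, "w") as f:
            f.write(str(i + 1))
    return n_items

def read_total(progress_dir: str) -> int:
    """Sum the per-worker progress files; polling once per second is enough."""
    total = 0
    for path in glob.glob(os.path.join(progress_dir, "progress_*.txt")):
        try:
            with open(path) as f:
                total += int(f.read() or 0)
        except (ValueError, OSError):
            pass
    return total

if __name__ == "__main__":
    n_workers, per_worker = 4, 50
    with tempfile.TemporaryDirectory() as progress_dir:
        run = lambda: Parallel(n_jobs=n_workers, backend="loky")(
            delayed(worker)(i, per_worker, progress_dir) for i in range(n_workers)
        )
        runner = threading.Thread(target=run)
        runner.start()
        with tqdm(total=n_workers * per_worker) as pbar:
            while runner.is_alive():
                pbar.n = read_total(progress_dir)
                pbar.refresh()
                time.sleep(1)
            pbar.n = read_total(progress_dir)
            pbar.refresh()
        runner.join()
```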
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5991/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5991/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5989
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5989/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5989/events
|
https://github.com/huggingface/datasets/issues/5989
| 1,774,134,091
|
I_kwDODunzps5pvyNL
| 5,989
|
Set a rule on the config and split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?",
"See a report where the datasets server fails: https://huggingface.co/datasets/poloclub/diffusiondb/discussions/2#6374ff55b93cbdf65675f564\r\n\r\nThe config name is `random_10k [2m]`!"
] |
2023-06-26T07:34:14Z
|
2023-07-19T14:22:54Z
| null |
COLLABORATOR
| null | null | null | null |
> should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
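For illustration only, a sketch of what such a rule could look like (not a decided policy): restrict config and split names to a conservative character set, which would reject names like the `random_10k [2m]` mentioned in the comments.
```python
import re

# Hypothetical rule: letters, digits, '.', '-' and '_' only.
_NAME_RE = re.compile(r"^[\w.-]+$", flags=re.ASCII)

def check_config_or_split_name(name: str) -> None:
    if not _NAME_RE.match(name):
        raise ValueError(
            f"Bad config/split name {name!r}: only letters, digits, '.', '-' and '_' are allowed."
        )

check_config_or_split_name("random_10k")         # passes
# check_config_or_split_name("random_10k [2m]")  # would raise ValueError
```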
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5989/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5988
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5988/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5988/events
|
https://github.com/huggingface/datasets/issues/5988
| 1,773,257,828
|
I_kwDODunzps5pscRk
| 5,988
|
ConnectionError: Couldn't reach dataset_infos.json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20674868?v=4",
"events_url": "https://api.github.com/users/yulingao/events{/privacy}",
"followers_url": "https://api.github.com/users/yulingao/followers",
"following_url": "https://api.github.com/users/yulingao/following{/other_user}",
"gists_url": "https://api.github.com/users/yulingao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yulingao",
"id": 20674868,
"login": "yulingao",
"node_id": "MDQ6VXNlcjIwNjc0ODY4",
"organizations_url": "https://api.github.com/users/yulingao/orgs",
"received_events_url": "https://api.github.com/users/yulingao/received_events",
"repos_url": "https://api.github.com/users/yulingao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yulingao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yulingao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yulingao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Unfortunately, I can't reproduce the error. What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provide more info about your network (region, proxies, etc.)?"
] |
2023-06-25T12:39:31Z
|
2023-07-07T13:20:57Z
|
2023-07-07T13:20:57Z
|
NONE
| null | null | null | null |
### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
### Steps to reproduce the bug
train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train')
### Expected behavior
download the dataset
### Environment info
centos7
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5988/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5987
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5987/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5987/events
|
https://github.com/huggingface/datasets/issues/5987
| 1,773,047,909
|
I_kwDODunzps5prpBl
| 5,987
|
Why max_shard_size is not supported in load_dataset and passed to download_and_prepare
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.",
"In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure blob or hdfs which may automatically download the shard from background)",
"But `load_dataset` doesn't support caching (and reading) Arrow datasets from remote storage. \r\n\r\n`load_datset_builder` + `download_and_prepare` is not equal to `load_dataset`. The latter has one more step, `builder.as_dataset`, that memory-maps Arrow files, which only works for local files.",
"Thanks. So if I want to use `IterableDataset` and control the size of single arrow file, how should I organize the data loader? Maybe `load_dataset_build` + `download_and_prepare` + `builder.as_dataset` + `dataset.to_iterable_dataset`?",
"Yes, this should work.\r\n\r\nI think we can expose `max_shard_size` in `load_dataset`, so feel free to open a PR."
] |
2023-06-25T04:19:13Z
|
2023-06-29T16:06:08Z
|
2023-06-29T16:06:08Z
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do is break up `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
### Expected behavior
Users can define the max shard size.
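A sketch of the workflow discussed in the comments, which already exposes `max_shard_size` through the builder API (the dataset name and shard size are placeholders):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb")                 # placeholder dataset
builder.download_and_prepare(max_shard_size="500MB")   # control the Arrow shard size
ds = builder.as_dataset(split="train")                 # memory-mapped Arrow dataset
ids = ds.to_iterable_dataset()                         # stream over the prepared shards
```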
### Environment info
datasets==2.13.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5987/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5985/events
|
https://github.com/huggingface/datasets/issues/5985
| 1,771,588,158
|
I_kwDODunzps5pmEo-
| 5,985
|
Cannot reuse tokenizer object for dataset map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikigenius",
"id": 12724810,
"login": "vikigenius",
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikigenius",
"user_view_type": "public"
}
|
[
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] | null |
[
"This is a known issue: https://github.com/huggingface/datasets/issues/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with the same set of parameters as the ones in the map transform)",
"Closing since this is a duplicate"
] |
2023-06-23T14:45:31Z
|
2023-07-21T14:09:14Z
|
2023-07-21T14:09:14Z
|
NONE
| null | null | null | null |
### Describe the bug
Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both.
Passing the tokenizer to the dataset `map` function causes the tokenizer to be fingerprinted in an unexpected way. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though its hash remains the same.
But `dumps` is able to detect that internal change, which causes the tokenizer object's fingerprint to change.
### Steps to reproduce the bug
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2 # Assertion Error
```
But if you use just the hash of the object without dumps, the hashes don't change
```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t) # Just hash no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t) # Just hash no dumps
assert th1 == th2 # This is OK
```
This causes situations such as the following
1. Create a text file like this `yes "This is an example text" | head -n 10000 > lines.txt`
```python
from transformers import AutoTokenizer
import datasets
class TokenizeMapper(object):
"""Mapper for tokenizer.
This is needed because the caching mechanism of HuggingFace does not work on
lambdas. Each time a new lambda will be created by a new process which will
lead to a different hash.
This way we can have a universal mapper object in init and reuse it with the same
hash for each process.
"""
def __init__(self, tokenizer):
"""Initialize the tokenizer."""
self.tokenizer = tokenizer
def __call__(self, examples, **kwargs):
"""Run the mapper."""
texts = examples["text"]
tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
batch_outputs = {
"input_ids": tt.input_ids,
"attention_mask": tt.attention_mask,
}
return batch_outputs
t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
mapper,
batched=False,
remove_columns=["text"],
).with_format("torch")
```
The second call to `map` should reuse the cached processed dataset from `mds1`, but instead it redoes the tokenization because of the behavior of `dumps`.
### Expected behavior
We should be able to initialize a tokenizer once and reuse it, so that the same `map` computation is reused for the same dataset.
The second call to `map` should reuse the cached processed dataset from `mds1`, but instead it redoes the tokenization because of the behavior of `dumps`.
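A sketch of the workaround mentioned in the comments: pin the cache file explicitly so the second call reuses it regardless of the fingerprint (the cache file name is a placeholder):
```python
# Reusing the same explicit cache file makes the second map call hit the cache
# even though the tokenizer's pickled state (and thus its fingerprint) changed.
mds1 = ds["train"].map(
    mapper,
    batched=False,
    remove_columns=["text"],
    cache_file_name="tokenized_lines.arrow",
).with_format("torch")
mds2 = ds["train"].map(
    mapper,
    batched=False,
    remove_columns=["text"],
    cache_file_name="tokenized_lines.arrow",
).with_format("torch")
```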
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5985/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5984
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5984/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5984/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5984/events
|
https://github.com/huggingface/datasets/issues/5984
| 1,771,571,458
|
I_kwDODunzps5pmAkC
| 5,984
|
AutoSharding IterableDataset's when num_workers > 1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4",
"events_url": "https://api.github.com/users/mathephysicist/events{/privacy}",
"followers_url": "https://api.github.com/users/mathephysicist/followers",
"following_url": "https://api.github.com/users/mathephysicist/following{/other_user}",
"gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mathephysicist",
"id": 25594384,
"login": "mathephysicist",
"node_id": "MDQ6VXNlcjI1NTk0Mzg0",
"organizations_url": "https://api.github.com/users/mathephysicist/orgs",
"received_events_url": "https://api.github.com/users/mathephysicist/received_events",
"repos_url": "https://api.github.com/users/mathephysicist/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mathephysicist",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n\r\nPS: I don't expect significant speed-up for local, uncompressed Arrow files.",
"Alternatively we could support multiprocessing map for iterable datasets and let the user do the CPU intensive task there ?\r\n\r\nThis way it would work on arrow data but also on any iterable dataset",
"> For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n> \r\n> @lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n> \r\n> PS: I don't expect significant speed-up for local, uncompressed Arrow files.\r\n\r\nCould you explain why you'd need to change the arrow format?\r\n\r\nWhen we use streaming datasets we simply determine the number of worker shards and then add some modulo logic at the appropriate place. Worst case scenario, you'd skip streaming entries according to the number of shards.\r\n\r\nFor PyTorch, I'd be happy to provide an implementation or a sketch thereof, if you point me toward what the testing requirements would be for such a PR.",
"> Could you explain why you'd need to change the arrow format?\r\n\r\nThis way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.",
"> > Could you explain why you'd need to change the arrow format?\r\n> \r\n> This way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.\r\n\r\nI guess I don't understand why you'd need to subset the dataset in the first place. \r\nIt seems sufficient to figure out how to offset or skip rows.\r\n\r\nFor instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\nThat's one way to do it, where of course you'd need to account for gpu sharding as well.\r\n\r\n\r\nOtherwise, how did you implement worker/node/GPU sharding for iterable/streaming data where you do not have index information or prior splits (e.g. files)?",
"> For instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\n\r\nThat works indeed ! And what we meant is that you can make it even faster to instantiate. Indeed using RecordBatchStreamReader you need to get the list of all the record batches in each worker, whereas you could just get the list of record batches per worker if you use the record batches locations in the Arrow IPC file footer. This would be especially appreciated to have a fast instantiation in case you have tens of thousands of Arrow files for example.",
"Any recent updates on this ? ",
"I would also appreciate this feature"
] |
2023-06-23T14:34:20Z
|
2024-03-22T15:01:14Z
| null |
NONE
| null | null | null | null |
### Feature request
Minimal Example
```
import torch
from datasets import IterableDataset
d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.dataloader.DataLoader(d,num_workers=3)
for sample in dl:
    print(sample)
```
Warning:
Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.
Expected behavior:
The dataset is sharded so that each CPU worker processes a contiguous subset (so you can still do checkpoint loading/saving).
### Motivation
I have a lot of unused CPUs and would like to be able to shard iterable datasets with PyTorch's DataLoader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed training) gets different shards, but we should extend it so that this also works across dataloader workers.
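In the meantime, a workaround sketch (my own, not an official API): wrap the Hugging Face iterable dataset in a small torch `IterableDataset` that applies the modulo logic mentioned in the comments via `get_worker_info()`, so each dataloader worker keeps a different slice. Note that every worker still iterates over (and skips) the full stream, so this only parallelizes the downstream per-example work.
```python
import torch

class WorkerShardedIterable(torch.utils.data.IterableDataset):
    """Give each dataloader worker every num_workers-th example of an iterable dataset."""

    def __init__(self, hf_iterable_dataset):
        self.ds = hf_iterable_dataset

    def __iter__(self):
        info = torch.utils.data.get_worker_info()
        if info is None:  # single-process data loading
            yield from self.ds
            return
        for i, example in enumerate(self.ds):
            if i % info.num_workers == info.id:
                yield example

# Usage with the example above (d is the Hugging Face IterableDataset):
# dl = torch.utils.data.DataLoader(WorkerShardedIterable(d), num_workers=3)
```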
### Your contribution
If someone points me to what needs to change, I can create a PR.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5984/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5984/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5982
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5982/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5982/events
|
https://github.com/huggingface/datasets/issues/5982
| 1,770,333,296
|
I_kwDODunzps5phSRw
| 5,982
|
404 on Datasets Documentation Page
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/118509387?v=4",
"events_url": "https://api.github.com/users/kmulka-bloomberg/events{/privacy}",
"followers_url": "https://api.github.com/users/kmulka-bloomberg/followers",
"following_url": "https://api.github.com/users/kmulka-bloomberg/following{/other_user}",
"gists_url": "https://api.github.com/users/kmulka-bloomberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kmulka-bloomberg",
"id": 118509387,
"login": "kmulka-bloomberg",
"node_id": "U_kgDOBxBPSw",
"organizations_url": "https://api.github.com/users/kmulka-bloomberg/orgs",
"received_events_url": "https://api.github.com/users/kmulka-bloomberg/received_events",
"repos_url": "https://api.github.com/users/kmulka-bloomberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kmulka-bloomberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kmulka-bloomberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kmulka-bloomberg",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This wasnโt working for me a bit earlier, but it looks to be back up now",
"We had a minor issue updating the docs after the latest release. It should work now :)."
] |
2023-06-22T20:14:57Z
|
2023-06-26T15:45:03Z
|
2023-06-26T15:45:03Z
|
NONE
| null | null | null | null |
### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
hugginface.co
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5982/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5981
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5981/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5981/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5981/events
|
https://github.com/huggingface/datasets/issues/5981
| 1,770,310,087
|
I_kwDODunzps5phMnH
| 5,981
|
Only two cores are getting used in sagemaker with pytorch 3.10 kernel
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/107141022?v=4",
"events_url": "https://api.github.com/users/mmr-crexi/events{/privacy}",
"followers_url": "https://api.github.com/users/mmr-crexi/followers",
"following_url": "https://api.github.com/users/mmr-crexi/following{/other_user}",
"gists_url": "https://api.github.com/users/mmr-crexi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mmr-crexi",
"id": 107141022,
"login": "mmr-crexi",
"node_id": "U_kgDOBmLXng",
"organizations_url": "https://api.github.com/users/mmr-crexi/orgs",
"received_events_url": "https://api.github.com/users/mmr-crexi/received_events",
"repos_url": "https://api.github.com/users/mmr-crexi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mmr-crexi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmr-crexi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mmr-crexi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https://github.com/pytorch/pytorch/issues/99625",
"From reading that ticket, it may be down in mkl? Is it worth hotfixing in the meantime, with the express intention of turning it off? I know that's a horribly crufty solution, but it's also deeply frustrating to be limited to 2 cores for operations as simple as filtration.",
"This is too specific and unrelated to `datasets`, so this shouldn't be fixed here.",
"@mariosasko @mmr-crexi I had the exact same problem on my kubernetes cluster. the datasets subprocess only user 1 and 17 core"
] |
2023-06-22T19:57:31Z
|
2023-10-30T06:17:40Z
|
2023-07-24T11:54:52Z
|
NONE
| null | null | null | null |
### Describe the bug
When using the newer PyTorch Python 3.10 (`pytorch_p310`) kernel, only 2 cores are used by the Hugging Face `filter` and `map` functions. The PyTorch Python 3.9 kernel would use as many cores as specified in the `num_proc` field.
We have solved this in our own code by placing the following snippet in the code that is called inside subprocesses:
```os.sched_setaffinity(0, {i for i in range(1000)})```
The problem, as near as we can tell, is that once upon a time, CPU affinity was set using a bitmask ("0xfffff" and the like), and affinity recently changed to a list of processors rather than the mask. As such, only processors 1 and 17 are shown to be working in htop.

When running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active.
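A minimal sketch of that affinity-reset workaround applied inside a mapped function (the dataset and column names here are illustrative placeholders, not from the original report):
```python
import os

from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # placeholder dataset

def lowercase(example):
    # Reset CPU affinity inside the worker subprocess so all cores can be used;
    # the first call per worker is what matters, repeated calls are harmless.
    os.sched_setaffinity(0, set(range(os.cpu_count())))
    example["text"] = example["text"].lower()
    return example

ds = ds.map(lowercase, num_proc=16)
```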
### Steps to reproduce the bug
Repro steps:
1. Create an aws sagemaker instance
2. use the pytorch 3_10 kernel
3. Load a dataset
4. run a filter operation
5. watch as only 2 cores are used when num_proc > 2
6. run a map operation
7. watch as only 2 cores are used when num_proc > 2
8. run a map operation with processor affinity reset inside the function called via map
9. Watch as all cores run
### Expected behavior
All specified cores are used via the num_proc argument.
### Environment info
AWS sagemaker with the following init script run in the terminal after instance creation:
conda init bash
bash
conda activate pytorch_p310
pip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
sudo yum -y install htop
sudo yum -y update
sudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5981/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5981/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5980
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5980/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5980/events
|
https://github.com/huggingface/datasets/issues/5980
| 1,770,255,973
|
I_kwDODunzps5pg_Zl
| 5,980
|
Viewing dataset card returns "502 Bad Gateway"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tbenthompson",
"id": 4241811,
"login": "tbenthompson",
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tbenthompson",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Can you try again? Maybe there was a minor outage.",
"Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. ",
"we fixed something on the server side, glad it's fixed now"
] |
2023-06-22T19:14:48Z
|
2023-06-27T08:38:19Z
|
2023-06-26T14:42:45Z
|
NONE
| null | null | null | null |
Viewing the dataset card at the following URL returns "502 Bad Gateway": https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the "Files and versions" tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tbenthompson",
"id": 4241811,
"login": "tbenthompson",
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tbenthompson",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5980/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5975
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5975/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5975/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5975/events
|
https://github.com/huggingface/datasets/issues/5975
| 1,768,271,343
|
I_kwDODunzps5pZa3v
| 5,975
|
Streaming Dataset behind Proxy - FileNotFoundError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4",
"events_url": "https://api.github.com/users/Veluchs/events{/privacy}",
"followers_url": "https://api.github.com/users/Veluchs/followers",
"following_url": "https://api.github.com/users/Veluchs/following{/other_user}",
"gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Veluchs",
"id": 135350576,
"login": "Veluchs",
"node_id": "U_kgDOCBFJMA",
"organizations_url": "https://api.github.com/users/Veluchs/orgs",
"received_events_url": "https://api.github.com/users/Veluchs/received_events",
"repos_url": "https://api.github.com/users/Veluchs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Veluchs",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Duplicate of #",
"Hi ! can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables",
"Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PROXY'] = \"http://example.com:xxxx\" \r\nos.environ['HTTPS_PROXY'] = \"http://example.com:xxxx\" \r\n`\r\n\r\nHowever, I still get the same error.\r\n\r\nOne thing that could be helpfull: When downloading a dataset without streaming i get the following message:\r\n_HF google storage unreachable. Downloading and preparing it from source_.\r\nThe download does however work as expected.\r\n",
"Are you able to use `aiohttp` to get the file at `https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json` using your proxy ?",
"It only works when passing trust_env=True when creating the ClientSession, as well as setting ssl=False.\r\n\r\nWorking Example:\r\n\r\n```\r\nimport os\r\n\r\nos.environ['HTTP_PROXY'] = \"xyz\"\r\nos.environ['HTTPS_PROXY'] = \"xyz\"\r\n\r\nimport asyncio\r\nimport aiohttp\r\n\r\nasync def download_pep(url):\r\n async with aiohttp.ClientSession(trust_env=True) as session:\r\n print(\"1\")\r\n async with session.get(url, ssl=False) as resp:\r\n print(\"2\")\r\n content = await resp.text()\r\n print(content)\r\n return content\r\n\r\nasyncio.run(download_pep(\"https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\"))\r\n```\r\n\r\n\r\n\r\nSSL Verification has been a problem with other packages as well. Usually I circumvent the problem by setting\r\n```\r\nimport ssl\r\nssl._create_default_https_context = ssl._create_unverified_context\r\n```\r\n(probably not the best idea for security), although here aiohttp does not seem to use this default context.",
"We do pass `trust_env` as well. Could you share the full stack trace you get when streaming using `datasets` ? That could help locate where we might have forgotten to pass `trust_env`",
"Is there a way to disable ssl verification when streaming a dataset. I suspect this might be the isssue with my proxy.\r\n\r\n\r\nHere you go:\r\n\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[8], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 ds = load_dataset(\"facebook/voxpopuli\", name=\"de\", streaming=True)\r\n 5 sample = next(iter(ds))\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281), in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1274 dl_manager = StreamingDownloadManager(\r\n 1275 base_path=base_path or self.base_path,\r\n 1276 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1277 dataset_name=self.name,\r\n 1278 data_dir=self.config.data_dir,\r\n 1279 )\r\n 1280 self._check_manual_download(dl_manager)\r\n-> 1281 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1282 # By default, return all splits\r\n 1283 if split is None:\r\n\r\nFile [~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120), in Voxpopuli._split_generators(self, dl_manager)\r\n 118 def _split_generators(self, dl_manager):\r\n 119 n_shards_path = dl_manager.download_and_extract(_N_SHARDS_FILE)\r\n--> 120 with open(n_shards_path) as f:\r\n 121 n_shards = json.load(f)\r\n 123 if self.config.name == \"en_accented\":\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71), in extend_module_for_streaming..wrap_auth..wrapper(*args, **kwargs)\r\n 69 @wraps(function)\r\n 70 def wrapper(*args, **kwargs):\r\n---> 71 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile 
[~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517), in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 515 except FileNotFoundError:\r\n 516 if file.startswith(config.HF_ENDPOINT):\r\n--> 517 raise FileNotFoundError(\r\n 518 file + \"\\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\"\r\n 519 ) from None\r\n 520 else:\r\n 521 raise\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```",
"> Is there a way to disable ssl verification when streaming a dataset.\r\n\r\nI don't think so.\r\n\r\nWe use `fsspec` HTTPFileSystem implementation that is based on `aiohttp`. If you register a subclass of HTTPFileSystem that has SSL disabled by default it could work, but I wouldn't recommended it because it can raise security issues.",
"Okay thanks for your help! I guess I have to figure out how to improve the proxy environment / see if I can make it work with ssl connections."
] |
2023-06-21T19:10:02Z
|
2023-06-30T05:55:39Z
|
2023-06-30T05:55:38Z
|
NONE
| null | null | null | null |
### Describe the bug
When trying to stream a dataset, I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a dataset without streaming works as expected.
Still, I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
### Steps to reproduce the bug
This is the code I use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
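One variant worth trying (suggested further in this thread) is to also set the upper-case variables, since the `aiohttp` client used for streaming reads the case-sensitive `HTTP_PROXY`/`HTTPS_PROXY`:
```
import os

# aiohttp honours the upper-case, case-sensitive proxy variables.
os.environ["HTTP_PROXY"] = "http://example.com:xxxx"
os.environ["HTTPS_PROXY"] = "http://example.com:xxxx"

from datasets import load_dataset

ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```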
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4",
"events_url": "https://api.github.com/users/Veluchs/events{/privacy}",
"followers_url": "https://api.github.com/users/Veluchs/followers",
"following_url": "https://api.github.com/users/Veluchs/following{/other_user}",
"gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Veluchs",
"id": 135350576,
"login": "Veluchs",
"node_id": "U_kgDOCBFJMA",
"organizations_url": "https://api.github.com/users/Veluchs/orgs",
"received_events_url": "https://api.github.com/users/Veluchs/received_events",
"repos_url": "https://api.github.com/users/Veluchs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Veluchs",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5975/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5975/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5971
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5971/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5971/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5971/events
|
https://github.com/huggingface/datasets/issues/5971
| 1,767,053,635
|
I_kwDODunzps5pUxlD
| 5,971
|
Docs: make "repository structure" easier to find
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/benjaminbrown038",
"id": 35114142,
"login": "benjaminbrown038",
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"type": "User",
"url": "https://api.github.com/users/benjaminbrown038",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4",
"events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}",
"followers_url": "https://api.github.com/users/benjaminbrown038/followers",
"following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/benjaminbrown038",
"id": 35114142,
"login": "benjaminbrown038",
"node_id": "MDQ6VXNlcjM1MTE0MTQy",
"organizations_url": "https://api.github.com/users/benjaminbrown038/orgs",
"received_events_url": "https://api.github.com/users/benjaminbrown038/received_events",
"repos_url": "https://api.github.com/users/benjaminbrown038/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions",
"type": "User",
"url": "https://api.github.com/users/benjaminbrown038",
"user_view_type": "public"
}
] | null |
[
"Loading a local dataset also works the same way when `data_files` are not specified, so I agree we should make this info easier to discover \r\n\r\ncc @stevhliu ",
"Is this issue open? If so, I will self assign. ",
"@benjaminbrown038 Yes, it is. Maybe @stevhliu can give some pointers on improving this doc page's discoverability.",
"I think we can add a version of the [Main use-case](https://huggingface.co/docs/datasets/repository_structure#main-usecase) section to the [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset) tutorial. \r\n\r\nCurrently, it doesn't tell you *how* to structure the repository; it only tells you how to create it. So adding the \"main use-case\" will help bridge the gap and make it easier to find. We should also add a link to the [Structure your repository](https://huggingface.co/docs/datasets/repository_structure) guide for users who want to learn about the other options.",
"#self-assign"
] |
2023-06-21T08:26:44Z
|
2023-07-05T06:51:38Z
| null |
COLLABORATOR
| null | null | null | null |
The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5971/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5971/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5970
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5970/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5970/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5970/events
|
https://github.com/huggingface/datasets/issues/5970
| 1,766,010,356
|
I_kwDODunzps5pQy30
| 5,970
|
description disappearing from Info when Uploading a Dataset Created with `from_dict`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/20377292?v=4",
"events_url": "https://api.github.com/users/balisujohn/events{/privacy}",
"followers_url": "https://api.github.com/users/balisujohn/followers",
"following_url": "https://api.github.com/users/balisujohn/following{/other_user}",
"gists_url": "https://api.github.com/users/balisujohn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/balisujohn",
"id": 20377292,
"login": "balisujohn",
"node_id": "MDQ6VXNlcjIwMzc3Mjky",
"organizations_url": "https://api.github.com/users/balisujohn/orgs",
"received_events_url": "https://api.github.com/users/balisujohn/received_events",
"repos_url": "https://api.github.com/users/balisujohn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/balisujohn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/balisujohn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/balisujohn",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Here's a minimal way to reproduce the bug, for the sake of convenience.\r\n````\r\nfrom datasets import Dataset, DatasetInfo, load_dataset\r\n\r\n\r\nepisodes_dict = {\"test\":[1,2,3],\"test2\": [1,2,4]}\r\n\r\nhugging_face_dataset = Dataset.from_dict(\r\n episodes_dict, info=DatasetInfo(description=\"test_str\")\r\n)\r\nprint(hugging_face_dataset.info)\r\n\r\nhugging_face_dataset.push_to_hub(\"balisujohn/minari_test\", private=True)\r\n\r\nredownloaded_dataset= load_dataset(\"balisujohn/minari_test\")[\"train\"]\r\n\r\n\r\nprint(redownloaded_dataset.info)\r\n````\r\n",
"Thanks for reporting !\r\n\r\nFor now I would recommend uploading a separate JSON file for your metadata.\r\n\r\nAlternatively you can upload a second configuration of the dataset containing your metadata but this feature is not released yet (though you can already use it from [here](https://github.com/huggingface/datasets/pull/5331), it will be released soon)"
] |
2023-06-20T19:18:26Z
|
2023-06-22T14:23:56Z
| null |
NONE
| null | null | null | null |
### Describe the bug
When uploading a dataset created locally using `from_dict` with a specified `description` field, the description appears before upload but is missing after upload and re-download.
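A sketch of the workaround suggested in the thread (keeping the metadata in a separate JSON file in the dataset repo instead of `DatasetInfo.description`); the file name below is a hypothetical placeholder:
```
import json
from huggingface_hub import HfApi

metadata = {"dataset_id": "dummy-combo-test-v0"}  # whatever would have gone into the description
HfApi().upload_file(
    path_or_fileobj=json.dumps(metadata).encode(),
    path_in_repo="minari_metadata.json",  # hypothetical file name
    repo_id="balisujohn/minari_test",
    repo_type="dataset",
)
```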
### Steps to reproduce the bug
I think the most relevant pattern in the code might be the following lines:
```
description_json_str = json.dumps(
{
"dataset_id": dataset.spec.dataset_id,
"env_name": dataset.spec.env_spec.id,
"action_space": serialize_space(dataset.spec.action_space),
"observation_space": serialize_space(dataset.spec.observation_space),
}
)
hugging_face_dataset = Dataset.from_dict(
episodes_dict, info=DatasetInfo(description=description_json_str)
)
```
Which comes from this function https://github.com/balisujohn/minarai/blob/8e023727f0a8488c4451651d9f7a79b981412c40/minari/integrations/hugging_face.py#L39
To replicate,
clone this branch of my Minari fork https://github.com/balisujohn/minarai/tree/dev-huggingface then run
```
python3.8 -m venv env
source env/bin/activate
python3 -m pip install -e .
python3 -m pip install pytest
```
Then change the Hugging Face repo path in the test called `test_hugging_face_push_and_pull_dataset` in `tests/integrations/test_hugging_face.py` to one you have permission to write to.
Then run:
```
pytest tests/integrations/test_hugging_face.py::test_hugging_face_push_and_pull_dataset
```
### Expected behavior
DATASET INFO BEFORE UPLOADING
DatasetInfo(description='{"dataset_id": "dummy-combo-test-v0", "env_name": "DummyComboEnv-v0", "action_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}]}", "observation_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"component_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [-1.0], \\"high\\": [1.0]}, \\"component_2\\": {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"subcomponent_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, \\"subcomponent_2\\": {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}, {\\"type\\": \\"Discrete\\", \\"dtype\\": \\"int64\\", \\"start\\": 0, \\"n\\": 10}]}}}}}]}]}"}', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)
...
DATASET INFO AFTER UPLOADING AND DOWNLOADING
DatasetInfo(description='', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits={'train': SplitInfo(name='train', num_bytes=4846, num_examples=60, shard_lengths=None, dataset_name='parquet')}, download_checksums={'https://huggingface.co/datasets/balisujohn/minari_test/resolve/8217b614ff9ba5edc1a30c7df430e92a46f65363/data/train-00000-of-00001-7c5900b93b35745e.parquet': {'num_bytes': 9052, 'checksum': None}}, download_size=9052, post_processing_size=None, dataset_size=4846, size_in_bytes=13898)
...
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5970/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5970/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5968
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5968/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5968/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5968/events
|
https://github.com/huggingface/datasets/issues/5968
| 1,765,252,561
|
I_kwDODunzps5pN53R
| 5,968
|
Common Voice datasets still need `use_auth_token=True`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc @pcuenca as well. \r\n\r\nNot super urgent btw",
"The issue commes from the dataset itself and is not related to the `datasets` lib\r\n\r\nsee https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1/blob/2c475b3b88e0f2e5828f830a4b91618a25ff20b7/common_voice_6_1.py#L148-L152",
"Let's remove these lines in the dataset no? cc @anton-l @Vaibhavs10 ",
"Addressed in:\r\n\r\n* `mozilla-foundation/common_voice_1_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_1_0/discussions/4)\r\n* `mozilla-foundation/common_voice_2_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_2_0/discussions/3)\r\n* `mozilla-foundation/common_voice_3_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_3_0/discussions/3)\r\n* `mozilla-foundation/common_voice_4_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0/discussions/3)\r\n* `mozilla-foundation/common_voice_5_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_5_0/discussions/3)\r\n* `mozilla-foundation/common_voice_5_1` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_5_1/discussions/3)\r\n* `mozilla-foundation/common_voice_6_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_6_0/discussions/3)\r\n* `mozilla-foundation/common_voice_6_1` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1/discussions/3)\r\n* `mozilla-foundation/common_voice_7_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/discussions/3)\r\n* `mozilla-foundation/common_voice_8_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/discussions/7)\r\n* `mozilla-foundation/common_voice_9_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0/discussions/8)\r\n* `mozilla-foundation/common_voice_10_0` [PR](https://huggingface.co/datasets/mozilla-foundation/common_voice_10_0/discussions/7)"
] |
2023-06-20T11:58:37Z
|
2023-07-29T16:08:59Z
|
2023-07-29T16:08:58Z
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in.
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
However, it throws an error - probably because something is hardcoded into the dataset loading script.
### Steps to reproduce the bug
1.)
```
huggingface-cli login
```
2.) Make sure that you have accepted the license here:
https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1
3.) Run:
```py
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation")
```
4.) You'll get:
```
File ~/hf/lib/python3.10/site-packages/datasets/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
961 split_dict = SplitDict(dataset_name=self.name)
962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
965 # Checksums verification
966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_1/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager)
148 hf_auth_token = dl_manager.download_config.use_auth_token
149 if hf_auth_token is None:
--> 150 raise ConnectionError(
151 "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
152 )
154 bundle_url_template = STATS["bundleURLTemplate"]
155 bundle_version = bundle_url_template.split("/")[0]
ConnectionError: Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset
```
### Expected behavior
One should not have to pass `use_auth_token=True`. Also see discussion here: https://github.com/huggingface/blog/pull/1243#discussion_r1235131150
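For reference, the workaround the hardcoded check currently forces (the very thing this issue argues should no longer be necessary):
```py
from datasets import load_dataset

# Temporary workaround: pass the token flag explicitly until the hardcoded check is removed.
load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation", use_auth_token=True)
```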
### Environment info
```
- `datasets` version: 2.13.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5968/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5968/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5967
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5967/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5967/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5967/events
|
https://github.com/huggingface/datasets/issues/5967
| 1,763,926,520
|
I_kwDODunzps5pI2H4
| 5,967
|
Config name / split name lost after map with multiproc
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_name and split",
"That sounds like a clean workaround!"
] |
2023-06-19T17:27:36Z
|
2023-06-28T08:55:25Z
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
Calling the `.map` method on a dataset loses its config name / split name, but only when run with multiprocessing.
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor
import numpy as np
# load dummy dataset
libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
# make train / test splits
libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1)
# example feature extractor
model_id = "ntu-spml/distilhubert"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)
sampling_rate = feature_extractor.sampling_rate
libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate))
max_duration = 30.0
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
truncation=True,
return_attention_mask=True,
)
return inputs
# single proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1
)
print(10 * "=" ,"Single processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
# multi proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2
)
print(10 * "=" ,"Multi processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
```
**Print Output:**
```
========== Single processing ==========
Config name before: clean Split name before: validation
Config name after: clean Split name after: validation
========== Multi processing ==========
Config name before: clean Split name before: validation
Config name after: None Split name after: None
```
=> we can see that the config/split names are lost in the multiprocessing setting
### Expected behavior
Should retain both config / split names in the multiproc setting
### Environment info
- `datasets` version: 2.13.1.dev0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5967/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5967/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5965
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5965/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5965/events
|
https://github.com/huggingface/datasets/issues/5965
| 1,763,648,540
|
I_kwDODunzps5pHyQc
| 5,965
|
"Couldn't cast array of type" in complex datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting! \r\n\r\nSpecifying the target features explicitly should avoid this error:\r\n```python\r\ndataset = dataset.map(\r\n batch_process,\r\n batched=True,\r\n batch_size=1,\r\n num_proc=1,\r\n remove_columns=dataset.column_names,\r\n features=datasets.Features({\"texts\": datasets.Sequence(datasets.Value(\"string\"))})\r\n)\r\n```\r\n\r\nThis error stems from our type promotion not handling the nested case. But this promotion/casting allocates memory in most scenarios, which can be problematic for large datasets, so explicitly passing the features is the optimal solution.",
"Hi @mariosasko thanks for the context, this is helpful to know. Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nFeels like something that would be easy to implement and could save memory / deal with this case in a standardized way.",
"> . Would it be worth having some logic to generate this explicit feature specification automatically if a type annotation for a .map returns a dataclass that can be inferred?\r\n\r\nInteresting proposal! Yes, we could consider doing this if the (return) type hint is `TypedDict`, and raise an error that type hints are incorrect if the cast using the inferred types fails.",
"@mariosasko Put up an initial PR to implement this proposal. Let me know your thoughts on direction and what else should be in-scope here."
] |
2023-06-19T14:16:14Z
|
2023-07-26T15:13:53Z
|
2023-07-26T15:13:53Z
|
NONE
| null | null | null | null |
### Describe the bug
When running `map` on a dataset with complex types, `datasets` is sometimes unable to infer a valid schema for the batches returned by the `datasets.map()` function. This often comes from conflicting types, such as when both empty lists and filled lists are competing for the same field value.
This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level.
Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided.
### Steps to reproduce the bug
A trivial reproduction case:
```python
from typing import Iterator, Any

import pandas as pd
import pytest

from datasets import Dataset

def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]:
    # Every column in a batch has the same number of rows, so any length works here.
    lengths = [len(values) for values in batch.values()]
    for i in range(next(iter(lengths))):
        yield {feature: values[i] for feature, values in batch.items()}

def examples_to_batch(examples) -> dict[str, list[Any]]:
    batch = {}
    for example in examples:
        for feature, value in example.items():
            if feature not in batch:
                batch[feature] = []
            batch[feature].append(value)
    return batch

def batch_process(examples):
    new_examples = []
    for example in batch_to_examples(examples):
        new_examples.append(dict(texts=example["raw_text"].split()))
    return examples_to_batch(new_examples)
df = pd.DataFrame(
[
{"raw_text": ""},
{"raw_text": "This is a test"},
{"raw_text": "This is another test"},
]
)
dataset = Dataset.from_pandas(df)
# datasets won't be able to typehint a dataset that starts with an empty example.
with pytest.raises(TypeError, match="Couldn't cast array of type"):
dataset = dataset.map(
batch_process,
batched=True,
batch_size=1,
num_proc=1,
remove_columns=dataset.column_names,
)
```
This results in crashes like:
```bash
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type string to null
```
### Expected behavior
The code should successfully map and create a new dataset without error.
### Environment info
Mac OSX, Linux
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5965/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5963/events
|
https://github.com/huggingface/datasets/issues/5963
| 1,762,774,457
|
I_kwDODunzps5pEc25
| 5,963
|
Got an error _pickle.PicklingError use Dataset.from_spark.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/112800614?v=4",
"events_url": "https://api.github.com/users/yanzia12138/events{/privacy}",
"followers_url": "https://api.github.com/users/yanzia12138/followers",
"following_url": "https://api.github.com/users/yanzia12138/following{/other_user}",
"gists_url": "https://api.github.com/users/yanzia12138/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanzia12138",
"id": 112800614,
"login": "yanzia12138",
"node_id": "U_kgDOBrkzZg",
"organizations_url": "https://api.github.com/users/yanzia12138/orgs",
"received_events_url": "https://api.github.com/users/yanzia12138/received_events",
"repos_url": "https://api.github.com/users/yanzia12138/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanzia12138/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanzia12138/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanzia12138",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"i got error using method from_spark when using multi-node Spark cluster. seems could only use \"from_spark\" in local?",
"@lhoestq ",
"cc @maddiedawson it looks like there an issue with `_validate_cache_dir` ?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset builder, and therefore contains the SparkContext itself.\r\n\r\nI think it can be fixed by defining `create_cache_and_write_probe` outside the Spark dataset builder, and pass a `partial(create_cache_and_write_probe, cache_dir=self._cache_dir)` to `mapPartitions`",
"Just saw this; thanks for flagging! Your proposed solution sounds good. I can prepare a PR",
"@maddiedawson can you show me the demo ,so i can test in local .before your PR"
] |
2023-06-19T05:30:35Z
|
2023-07-24T11:55:46Z
|
2023-07-24T11:55:46Z
|
NONE
| null | null | null | null |
python 3.9.2
Got a _pickle.PicklingError when using Dataset.from_spark.
The dataset import loads data from a Spark DataFrame on a multi-node Spark cluster:
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(df, keep_in_memory=True,
                        cache_dir="/pnc-data/data/nuplan/t5_spark/cache_data")
ds.save_to_disk(args.output_data)
Error:
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
_Originally posted by @yanzia12138 in https://github.com/huggingface/datasets/issues/5701#issuecomment-1594674306_
Traceback (most recent call last):
File "/home/work/main.py", line 100, in <module>
run(args)
File "/home/work/main.py", line 80, in run
ds = Dataset.from_spark(df1, keep_in_memory=True,
File "/home/work/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1281, in from_spark
return SparkDatasetReader(
File "/home/work/.local/lib/python3.9/site-packages/datasets/io/spark.py", line 53, in read
self.builder.download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 254, in _prepare_split
self._validate_cache_dir()
File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 122, in _validate_cache_dir
self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect()
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 950, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2951, in _jrdd
wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer,
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2830, in _wrap_function
pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2816, in _prepare_for_python_RDD
pickled_command = ser.dumps(command)
File "/home/work/.local/lib/python3.9/site-packages/pyspark/serializers.py", line 447, in dumps
raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
23/06/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
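A minimal sketch of the direction suggested in the comments above (the function name mirrors the traceback, but the body is assumed, not the actual `datasets` implementation): keep the probe function at module level and bind only picklable arguments with `functools.partial`, so the callable shipped to the workers never references the SparkContext.
```python
import os
from functools import partial


def create_cache_and_write_probe(iterable, cache_dir):
    # Runs on the workers: touch a probe file in the (shared) cache directory.
    # Only `cache_dir`, a plain string, is captured, so the function pickles cleanly.
    os.makedirs(cache_dir, exist_ok=True)
    probe_file = os.path.join(cache_dir, "fs_probe")
    with open(probe_file, "a"):
        pass
    return [os.path.isfile(probe_file)]


# Inside the builder, instead of referencing a bound method (which drags in the SparkContext):
# spark.sparkContext.parallelize(range(1), 1).mapPartitions(
#     partial(create_cache_and_write_probe, cache_dir=cache_dir)
# ).collect()
```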
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5963/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5963/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5962
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5962/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5962/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5962/events
|
https://github.com/huggingface/datasets/issues/5962
| 1,761,589,882
|
I_kwDODunzps5o_7p6
| 5,962
|
Issue with train_test_split maintaining the same underlying PyArrow Table
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/70730520?v=4",
"events_url": "https://api.github.com/users/Oziel14/events{/privacy}",
"followers_url": "https://api.github.com/users/Oziel14/followers",
"following_url": "https://api.github.com/users/Oziel14/following{/other_user}",
"gists_url": "https://api.github.com/users/Oziel14/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oziel14",
"id": 70730520,
"login": "Oziel14",
"node_id": "MDQ6VXNlcjcwNzMwNTIw",
"organizations_url": "https://api.github.com/users/Oziel14/orgs",
"received_events_url": "https://api.github.com/users/Oziel14/received_events",
"repos_url": "https://api.github.com/users/Oziel14/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oziel14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oziel14/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oziel14",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] |
2023-06-17T02:19:58Z
|
2023-06-17T02:19:58Z
| null |
NONE
| null | null | null | null |
### Describe the bug
I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.
### Steps to reproduce the bug
1. Load any dataset ```dataset = load_dataset("lhoestq/demo1")```
2. Try the next code:
```python
from datasets import Dataset, DatasetDict
train_size = 0.6
split_train = dataset["train"].train_test_split(
train_size=train_size,
)
separate_dataset_dict = DatasetDict({
"train": split_train["train"],
"test": split_train["test"],
})
```
3. The next code ```print(separate_dataset_dict)``` shows that the splits have 3 and 2 rows respectively.
4. But the next code:
```python
print(len(separate_dataset_dict["train"].data['id']))
print(len(separate_dataset_dict["test"].data['id']))
```
Indicates that both tables still have 5 rows.
### Expected behavior
However, I've noticed that train_test_split["train"].data, test_val_split["train"].data, and test_val_split["test"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected.
I believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?
I would appreciate any assistance with this issue. Thank you.
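For reference, a minimal sketch (an illustration of the behavior, not a confirmed fix) showing that the splits are index-mapped views over one Arrow table, and that `flatten_indices()` materializes a split into its own table:
```python
from datasets import load_dataset

ds = load_dataset("lhoestq/demo1", split="train")
split = ds.train_test_split(train_size=0.6)

# Row counts differ, but .data still exposes the shared underlying Arrow table.
print(len(split["train"]), len(split["test"]))                        # 3 2
print(len(split["train"].data["id"]), len(split["test"].data["id"]))  # 5 5

# Materialize an independent table when needed.
independent_train = split["train"].flatten_indices()
print(len(independent_train.data["id"]))  # 3
```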
### Environment info
I tried in Colab:
- `datasets` version: 2.13.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
and my PC:
- `datasets` version: 2.13.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5962/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5962/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5961
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5961/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5961/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5961/events
|
https://github.com/huggingface/datasets/issues/5961
| 1,758,525,111
|
I_kwDODunzps5o0Pa3
| 5,961
|
IterableDataset: split by node and map may preprocess samples that will be skipped anyway
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4",
"events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}",
"followers_url": "https://api.github.com/users/johnchienbronci/followers",
"following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}",
"gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/johnchienbronci",
"id": 27708347,
"login": "johnchienbronci",
"node_id": "MDQ6VXNlcjI3NzA4MzQ3",
"organizations_url": "https://api.github.com/users/johnchienbronci/orgs",
"received_events_url": "https://api.github.com/users/johnchienbronci/received_events",
"repos_url": "https://api.github.com/users/johnchienbronci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/johnchienbronci",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Does \"number of shards\" refer to the total number of data?\r\n\r\nmy config:\r\nnproc_per_node=2\r\nds=ds['train'] = load_dataset(streaming=True).take(50000)\r\n\r\nI'm test again: in prepare_data(), data have the same for each GPU\r\n",
"The number of shards is `ds.n_shards`. It corresponds generally to the number of files the dataset is made of, to be able to distribute to several nodes.\r\n\r\n**You don't end up with the same data per GPU**. But all the samples are going through your preprocessing function you pass to map. They are just skipped afterwards to only keep 1 sample out of n(GPUs)",
"For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end. \r\nIs my understanding correct?\r\n\r\nWhere can I print the actual training data for each GPU?",
"> For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end.\r\nIs my understanding correct?\r\n\r\nYes exactly :)\r\n\r\n> Where can I print the actual training data for each GPU?\r\n\r\nYou should call print in the data_collator",
"I print out n_shards, and under multiple GPUs, this value is always 1.\r\nIs this value correct?",
"Yes it's correct, and it explains why you always have the same data passed to your map function (the data can't be split).\r\n\r\nBut after being passed to `map`, each GPU keeps one example out of n(GPUs) so that you don't end up with duplicate data across GPUs",
"> > For each GPU, although see the same data in prepare_data(), the actual training data will not be the same in the end.\r\n> > Is my understanding correct?\r\n> \r\n> Yes exactly :)\r\n> \r\n> > Where can I print the actual training data for each GPU?\r\n> \r\n> You should call print in the data_collator\r\n\r\nOK, when printing the train data in the data collator, each GPU sees different data.\r\n\r\nThanks for your reply",
"Do we have a solution for this one? Or it's required to get \"number of shards is a factor of number of GPUs: in that case the shards are evenly distributed per GPU\"",
"For now it's required to have a number of shards that is a factor of the number of GPUs to not have all the workers process the same data (and then skip the right ones to not end up training on duplicate data).\r\n\r\nIt would be quite complex to implement a strategy that would utilize all the GPUs with an arbitrary number of shards even at the end of training"
] |
2023-06-15T10:29:10Z
|
2023-09-01T10:35:11Z
| null |
NONE
| null | null | null | null |
There are two ways an iterable dataset can be split by node:
1. if the number of shards is a factor of number of GPUs: in that case the shards are evenly distributed per GPU
2. otherwise, each GPU iterates over the data and at the end keeps 1 sample out of n(GPUs), skipping the others.
In case 2. it's therefore possible to have the same examples passed to `prepare_dataset` for each GPU.
This doesn't sound optimized though, because it runs the preprocessing on samples that won't be used in the end.
Could you open a new issue so that we can discuss about this and find a solution ?
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/5360#issuecomment-1592729051_
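A minimal sketch of the two cases (the dataset name and world size are only assumptions for illustration), using `n_shards` and `split_dataset_by_node`:
```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

world_size = 2  # number of GPUs / processes (assumed)

# Any sharded streaming dataset works here; the name is only an example.
ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
print(ds.n_shards)  # case 1 only applies when this is divisible by world_size


def prepare_dataset(example):
    return example  # placeholder for the real preprocessing


# With an indivisible shard count, every rank iterates over (and preprocesses) all samples,
# then keeps only 1 example out of `world_size` afterwards (case 2).
ds = ds.map(prepare_dataset)
ds_rank0 = split_dataset_by_node(ds, rank=0, world_size=world_size)
```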
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5961/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5961/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5959
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5959/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5959/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5959/events
|
https://github.com/huggingface/datasets/issues/5959
| 1,757,397,507
|
I_kwDODunzps5ov8ID
| 5,959
|
read metric glue.py from local file
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4",
"events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}",
"followers_url": "https://api.github.com/users/JiazhaoLi/followers",
"following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}",
"gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JiazhaoLi",
"id": 31148397,
"login": "JiazhaoLi",
"node_id": "MDQ6VXNlcjMxMTQ4Mzk3",
"organizations_url": "https://api.github.com/users/JiazhaoLi/orgs",
"received_events_url": "https://api.github.com/users/JiazhaoLi/received_events",
"repos_url": "https://api.github.com/users/JiazhaoLi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JiazhaoLi",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Sorry, I solve this by call `evaluate.load('glue_metric.py','sst-2')`\r\n"
] |
2023-06-14T17:59:35Z
|
2023-06-14T18:04:16Z
|
2023-06-14T18:04:16Z
|
NONE
| null | null | null | null |
### Describe the bug
Currently, the server is off-line, and I am using the GLUE metric from the local file downloaded from the Hub.
I downloaded and cached the datasets using `load_dataset('glue','sst2', cache_dir='/xxx')`, and then in off-line mode I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx')`. I can successfully reuse the cached datasets.
My problem is with `load_metric`.
When I run `load_metric('xxx/glue_metric.py','sst2',cache_dir='/xxx')`, it returns
` File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper
return deprecated_function(*args, **kwargs)
File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable`
Thanks in advance for help!
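A minimal sketch of the workaround from the comment above (the paths and config name are assumed):
```python
import evaluate

# Off-line: point the `evaluate` library at the locally downloaded metric script
# instead of going through the deprecated datasets.load_metric.
metric = evaluate.load("/xxx/glue_metric.py", "sst2")
print(metric.compute(predictions=[0, 1], references=[0, 1]))  # {'accuracy': 1.0}
```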
### Steps to reproduce the bug
N/A
### Expected behavior
N/A
### Environment info
`datasets == 2.12.0`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4",
"events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}",
"followers_url": "https://api.github.com/users/JiazhaoLi/followers",
"following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}",
"gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JiazhaoLi",
"id": 31148397,
"login": "JiazhaoLi",
"node_id": "MDQ6VXNlcjMxMTQ4Mzk3",
"organizations_url": "https://api.github.com/users/JiazhaoLi/orgs",
"received_events_url": "https://api.github.com/users/JiazhaoLi/received_events",
"repos_url": "https://api.github.com/users/JiazhaoLi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JiazhaoLi",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5959/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5959/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5955
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5955/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5955/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5955/events
|
https://github.com/huggingface/datasets/issues/5955
| 1,756,827,133
|
I_kwDODunzps5otw39
| 5,955
|
Strange bug in loading local JSON files, using load_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73934131?v=4",
"events_url": "https://api.github.com/users/Night-Quiet/events{/privacy}",
"followers_url": "https://api.github.com/users/Night-Quiet/followers",
"following_url": "https://api.github.com/users/Night-Quiet/following{/other_user}",
"gists_url": "https://api.github.com/users/Night-Quiet/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night-Quiet",
"id": 73934131,
"login": "Night-Quiet",
"node_id": "MDQ6VXNlcjczOTM0MTMx",
"organizations_url": "https://api.github.com/users/Night-Quiet/orgs",
"received_events_url": "https://api.github.com/users/Night-Quiet/received_events",
"repos_url": "https://api.github.com/users/Night-Quiet/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night-Quiet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night-Quiet/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night-Quiet",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This is the actual error:\r\n```\r\nFailed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values\r\n```\r\nWhich means some samples are incorrectly formatted.\r\n\r\nPyArrow, a storage backend that we use under the hood, requires that all the list elements have the same level of nesting (same number of dimensions) or are `None`.\r\n```python\r\nimport pyarrow as pa\r\npa.array([[1, 2, 3], 2]) # ArrowInvalid: cannot mix list and non-list, non-null values\r\npa.array([[1, 2, 3], [2]]) # works\r\n``` ",
"@mariosasko \r\nI used the same operation to check the original data before and after slicing.\r\nThis is reflected in my code.\r\n160000 is not a specific number.\r\nI can also get output using 150000.\r\nThis doesn't seem to align very well with what you said.\r\nBecause if only some sample formats are incorrect.\r\nSo there should be an error in one of the front and back slices.\r\nthank you for your reply.",
"Our JSON loader does the following in your case:\r\n\r\n```python\r\nimport json\r\nimport pyarrow as pa\r\n\r\nwith open(file, encoding=\"utf-8\") as f:\r\n dataset = json.load(f)\r\nkeys = set().union(*[row.keys() for row in dataset])\r\nmapping = {col: [row.get(col) for row in dataset] for col in keys}\r\npa_table = pa.Table.from_pydict(mapping) # the ArrowInvalid error comes from here\r\n```\r\n\r\nSo if this code throws an error with correctly-formatted JSON, then this is an Arrow bug and should be reported in their repo.\r\n\r\n> I used the same operation to check the original data before and after slicing.\r\nThis is reflected in my code.\r\n160000 is not a specific number.\r\nI can also get output using 150000.\r\nThis doesn't seem to align very well with what you said.\r\nBecause if only some sample formats are incorrect.\r\nSo there should be an error in one of the front and back slices.\r\n\r\nYou should shuffle the data to make sure that's not the case",
"@mariosasko \r\nThank you.\r\nI will try again."
] |
2023-06-14T12:46:00Z
|
2023-06-21T14:42:15Z
|
2023-06-21T14:42:15Z
|
NONE
| null | null | null | null |
### Describe the bug
I am using `load_dataset` to load a JSON file, but I found a strange bug: an error is reported when the length of the JSON file exceeds 160000 items (the exact number is uncertain). I have checked the data with the following code and found no issues, so I cannot determine the true reason for this error.
The data is a list containing a dictionary. As follows:
[
{'input': 'someting...', 'target': 'someting...', 'type': 'someting...', 'history': ['someting...', ...]},
...
]
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
path = "target.json"
temp_path = "temp.json"
with open(path, "r") as f:
data = json.load(f)
print(f"\n-------the JSON file length is: {len(data)}-------\n")
with open(temp_path, "w") as f:
json.dump(data[:160000], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works when the JSON file length is 160000-------\n")
with open(temp_path, "w") as f:
json.dump(data[160000:], f)
dataset = load_dataset("json", data_files=temp_path)
print("\n-------This works and eliminates data issues-------\n")
with open(temp_path, "w") as f:
json.dump(data[:170000], f)
dataset = load_dataset("json", data_files=temp_path)
```
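A hedged sketch (not from the original report) of how one might locate the offending rows, given the `cannot mix list and non-list, non-null values` error explained in the comments above; the `history` field is only a guess at the culprit:
```python
import json

with open("target.json") as f:
    data = json.load(f)

# Rows where "history" is present but neither None nor a list would break Arrow's type inference.
bad_rows = [
    i for i, row in enumerate(data)
    if "history" in row and row["history"] is not None and not isinstance(row["history"], list)
]
print(bad_rows[:10])
```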
### Expected behavior
```
-------the JSON file length is: 173049-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3328.81it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 639.47it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 265.85it/s]
-------This works when the JSON file length is 160000-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 2038.05it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 794.83it/s]
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data.
100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 681.00it/s]
-------This works and eliminates data issues-------
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-63f391c89599c7b0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3682.44it/s]
Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 788.70it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values
Traceback (most recent call last):
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at /home/lakala/hjc/code/pycode/glm/temp.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lakala/hjc/code/pycode/glm/test.py", line 22, in <module>
dataset = load_dataset("json", data_files=temp_path)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
```
Ubuntu==22.04
python==3.8
pytorch-transformers==1.2.0
transformers== 4.27.1
datasets==2.12.0
numpy==1.24.3
pandas==1.5.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5955/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5955/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5953
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5953/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5953/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5953/events
|
https://github.com/huggingface/datasets/issues/5953
| 1,756,520,523
|
I_kwDODunzps5osmBL
| 5,953
|
Bad error message when trying to download gated dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc @sanchit-gandhi @Vaibhavs10 @lhoestq - this is mainly for demos that use Common Voice datasets as done here: https://github.com/facebookresearch/fairseq/tree/main/examples/mms#-transformers\r\n",
"Hi ! the error for me is\r\n\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/mozilla-foundation/common_voice_13_0/common_voice_13_0.py or any data file in the same directory. Couldn't find 'mozilla-foundation/common_voice_13_0' on the Hugging Face Hub either: FileNotFoundError: Dataset 'mozilla-foundation/common_voice_13_0' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\nAnd tbh idk how you managed to get your error. \"n_shards.json\" is not even a thing in `datasets`",
"Okay, I am able to reproduce @patrickvonplaten's original error: https://github.com/Vaibhavs10/scratchpad/blob/main/cv13_datasets_test.ipynb\r\n\r\nAlso not sure why it looks for `n_shards.json`",
"Ok I see, this file is downloaded from the CV dataset script - let me investigate",
"Ok I see: when you log out you no longer have access to the repository.\r\n\r\nTherefore the dataset script is loaded from cache:\r\n```\r\nWARNING:datasets.load:Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_13_0/22809012aac1fc9803eaffc44122e4149043748e93933935d5ea19898587e4d7 (last modified on Wed Jun 14 10:13:17 2023) since it couldn't be found locally at mozilla-foundation/common_voice_13_0., or remotely on the Hugging Face Hub.\r\n```\r\n\r\nand the script tries to download the n_shards.json but fails",
"Is this ok for you https://github.com/huggingface/datasets/pull/5954 ?\r\n\r\nI'll do a release this afternoon",
"Cool! ",
"this is included in the new release 2.13.0"
] |
2023-06-14T10:03:39Z
|
2023-06-14T16:36:51Z
|
2023-06-14T12:26:32Z
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When I attempt to download a gated model from the Hub without being logged in, I get a nice error message, e.g.:
```sh
Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password..
Will try to load from local cache.
```
If I do the same for a gated dataset on the Hub, I don't get a nice error message IMO:
```sh
File ~/hf/lib/python3.10/site-packages/fsspec/implementations/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)
427 except Exception as exc:
428 if policy == "get":
429 # If get failed, then raise a FileNotFoundError
--> 430 raise FileNotFoundError(url) from exc
431 logger.debug(str(exc))
433 return {"name": url, "size": None, **info, "type": "file"}
FileNotFoundError: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0/resolve/main/n_shards.json
```
### Steps to reproduce the bug
```
huggingface-cli logout
```
and then:
```py
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Swahili
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "sw", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
sw_sample = next(iter(stream_data))["audio"]["array"]
```
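For anyone reproducing this, a minimal sketch of the workaround (the token value is a placeholder): authenticating first restores access, which is why the confusing `FileNotFoundError` only appears when logged out.
```python
from huggingface_hub import login
from datasets import load_dataset

login(token="hf_xxx")  # placeholder; or run `huggingface-cli login` in a shell

stream_data = load_dataset(
    "mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True
)
```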
### Expected behavior
Better error message
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5953/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5953/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5951
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5951/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5951/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5951/events
|
https://github.com/huggingface/datasets/issues/5951
| 1,756,363,546
|
I_kwDODunzps5or_sa
| 5,951
|
What is the Right way to use discofuse dataset??
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akesh1235",
"id": 125154243,
"login": "akesh1235",
"node_id": "U_kgDOB3Wzww",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akesh1235",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for opening https://huggingface.co/datasets/discofuse/discussions/3, let's continue the discussion over there if you don't mind",
"I have posted there also sir, please check\r\n@lhoestq"
] |
2023-06-14T08:38:39Z
|
2023-06-14T13:25:06Z
|
2023-06-14T12:10:16Z
|
NONE
| null | null | null | null |
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is my understanding of the approach. Is it correct?**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:
1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**
The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**
The **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.
Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?
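For illustration only (this is one common formulation of sentence fusion, not a confirmed answer to the question): feed the incoherent pair to the encoder and use the coherent text as the decoder target. The toy example below is invented.
```python
def to_seq2seq(example):
    # Assumed formulation: encoder input = incoherent pair, decoder target = coherent text.
    source = f'{example["incoherent_first_sentence"]} {example["incoherent_second_sentence"]}'.strip()
    target = f'{example["coherent_first_sentence"]} {example["coherent_second_sentence"]}'.strip()
    return {"source": source, "target": target}


# e.g. ds = load_dataset("discofuse", "discofuse-wikipedia", split="train").map(to_seq2seq)
example = {
    "incoherent_first_sentence": "The film was released in 2000 .",
    "incoherent_second_sentence": "The film was a box office failure .",
    "coherent_first_sentence": "The film was released in 2000 , but it was a box office failure .",
    "coherent_second_sentence": "",
}
print(to_seq2seq(example))
```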
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5951/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5951/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5950
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5950/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5950/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5950/events
|
https://github.com/huggingface/datasets/issues/5950
| 1,755,197,946
|
I_kwDODunzps5onjH6
| 5,950
|
Support for data with instance-wise dictionary as features
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33274336?v=4",
"events_url": "https://api.github.com/users/richardwth/events{/privacy}",
"followers_url": "https://api.github.com/users/richardwth/followers",
"following_url": "https://api.github.com/users/richardwth/following{/other_user}",
"gists_url": "https://api.github.com/users/richardwth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richardwth",
"id": 33274336,
"login": "richardwth",
"node_id": "MDQ6VXNlcjMzMjc0MzM2",
"organizations_url": "https://api.github.com/users/richardwth/orgs",
"received_events_url": "https://api.github.com/users/richardwth/received_events",
"repos_url": "https://api.github.com/users/richardwth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richardwth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardwth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richardwth",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! We use the Arrow columnar format under the hood, which doesn't support such dictionaries: each field must have a fixed type and exist in each sample.\r\n\r\nInstead you can restructure your data like\r\n```\r\n{\r\n \"index\": 0,\r\n \"keys\": [\"2 * x + y >= 3\"],\r\n \"values\": [[\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"]],\r\n }\r\n},\r\n...\r\n{\r\n \"index\": 9999,\r\n \"keys\": [\"x >= 6\"],\r\n \"values\": [[\"x >= 6\", \"x >= 0\", \"x >= -1\"]],\r\n},\r\n...\r\n```",
"Maybe there could be some type of automated conversion from dicts to tuples. I am also trying to wrangle a json-based dataset into `datasets` and it's awful because of this issue.",
"Alternatively we can maybe support the [Json extension type](https://arrow.apache.org/docs/python/generated/pyarrow.JsonType.html#pyarrow.JsonType) in `pyarrow` ?\n\nbtw `datasets` is open to contributions on this subject if you'd like to take a look",
"Hmm, I'll think about this a bit.\n\nhttps://arrow.apache.org/docs/python/json.html\n\n> Nested JSON objects convert to a struct type, and inference proceeds recursively on the JSON objectsโ values.\n\nhttps://arrow.apache.org/docs/dev/python/generated/pyarrow.JsonType.html\n\nHmm... AFAICT from reading the docs, the `JsonType` seems like a string that is annotated as JSON. So when you read it, it's literally just the encoded JSON. So that's not ideal.\n\nI guess there are conceptually two components to using this:\n1. Modifying schema inference to use JsonType when \"appropriate\". More on that below.\n2. Handling JsonType when reading. I guess we would want to call `json.loads` for the user on any JsonType columns?\n\n### Schema inference\n\nI can think of a few ways forward:\n1. Add a JSON builder option that converts nested JSON objects to JsonTypes instead.\n2. Use some heuristic to detect when a struct type is bad (e.g., average number of `None` values) and convert those to JsonTypes instead.\n\nThe first option is the easiest, but also would remove the nested structure from the arrow schema. Does this matter?",
"The first option sounds good indeed, and more explicit / flexible.\n\n> but also would remove the nested structure from the arrow schema. Does this matter?\n\nWell I expect some `pyarrow.compute` functions for nested data to not work for JsonType, and maybe exporting to a pandas / polars dataframe can have a few issues. But it's ok as a first step and we can iterate imo",
"So this will be harder than I thought. I thought pyarrow would provide a separate type inference function, but it seems like it doesn't. Type inference is wrapped into the conversion/loading functions, which are, of course, failing.",
"Since we use pandas to load the JSON before converting to arrow, maybe we can do some conversion there.\n\n```\nIn [29]: df = pd.read_json(\"/tmp/wtf.json\")\n\nIn [30]: df\nOut[30]: \n lol\n0 [42, []]\n\nIn [31]: pa.Table.from_pandas(df)\n---------------------------------------------------------------------------\nArrowInvalid Traceback (most recent call last)\n```\n\nBut if we convert the problem column to a string, we can proceed:\n\n```\nIn [38]: df['lol'] = df['lol'].astype(str)\n\nIn [39]: df\nOut[39]: \n lol\n0 [42, []]\n\nIn [40]: pa.Table.from_pandas(df)\nOut[40]: \npyarrow.Table\nlol: string\n----\nlol: [[\"[42, []]\"]]\n```\n\nSo I guess we could coerce columns to strings until we're able to convert to arrow, and then convert those coerced columns to JsonType in the final arrow. This feels kind of icky to me though. But I think it might work.",
"Makes sense, yes it's maybe the way to go to have it working short term.\n\nLonger term `pyarrow` should handle it though IMO via its JSON reader, have you opened an issue there already by any chance ?",
"> Longer term pyarrow should handle it though IMO via its JSON reader, have you opened an issue there already by any chance ?\n\nI agree that `pyarrow` changes are a better solution long term. I haven't opened any issues yet. Were you thinking:\n \n* To separate the inference function\n* Open an issue with an incompatible JSON file and see what they suggest?",
"I was thinking of seeing what they suggest in case of incompatible JSON, it's also possible that other members of the community have asked for for help / requested such a feature already. But sharing about the inference function idea can be interesting as well",
"Thanks, I will do this and see what they suggest.\r\n\r\nOn Mon, Apr 7, 2025, 9:18โฏAM Quentin Lhoest - ***@***.***\r\n***@***.***> wrote:\r\n\r\n> I was thinking of seeing what they suggest in case of incompatible JSON,\r\n> it's also possible that other members of the community have asked for for\r\n> help / requested such a feature already. But sharing about the inference\r\n> function idea can be interesting as well\r\n>\r\n> โ\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5950#issuecomment-2783312654>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZKUPOK5IBQBMMVDCTT2YJ3LXAVCNFSM6AAAAAB2H43DQKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOOBTGMYTENRVGQ>\r\n> .\r\n> You are receiving this because you are subscribed to this thread.Message\r\n> ID: ***@***.***>\r\n> [image: lhoestq]*lhoestq* left a comment (huggingface/datasets#5950)\r\n> <https://github.com/huggingface/datasets/issues/5950#issuecomment-2783312654>\r\n>\r\n> I was thinking of seeing what they suggest in case of incompatible JSON,\r\n> it's also possible that other members of the community have asked for for\r\n> help / requested such a feature already. But sharing about the inference\r\n> function idea can be interesting as well\r\n>\r\n> โ\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5950#issuecomment-2783312654>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAHYKZKUPOK5IBQBMMVDCTT2YJ3LXAVCNFSM6AAAAAB2H43DQKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOOBTGMYTENRVGQ>\r\n> .\r\n> You are receiving this because you are subscribed to this thread.Message\r\n> ID: ***@***.***>\r\n>\r\n"
] |
2023-06-13T15:49:00Z
|
2025-04-07T13:20:37Z
| null |
NONE
| null | null | null | null |
### Feature request
I notice that when loading data instances whose features are Python dictionaries, the dictionary keys are broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.
Is it possible to avoid this behavior, i.e., load dictionary features as they are and not broadcast the keys across instances? Please note that these dictionaries would have to be processed dynamically into strings (and tokenized) at each training iteration.
### Motivation
I am trying to load a dataset from a json file. Each instance of the dataset has a feature that is a dictionary but its keys depend on the instance. Every two instances may have different keys. For example, imagine a dataset that contains a set of math expressions from a bunch of mutually redundant expressions:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
}
},
...
{
"index": 9999,
"feature": {
"x >= 6": ["x >= 6", "x >= 0", "x >= -1"],
...
}
},
...
```
When directly loading the dataset using `data = load_dataset("json", data_files=file_paths, split='train')`, each instance would have all the keys from other instances and None as values. That is, instance of index 0 becomes:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
"x >= 6": None, # keys from other instances
...
}
},
```
This is not desirable. Moreover, an error is raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, perhaps because their dictionary features contain different keys.
A solution I can think of is to store the dictionary features as a long string, and evaluate it later. Please kindly suggest any other solution using existing methods of datasets.
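A minimal sketch of both workarounds (the fixed-schema restructuring suggested in the comments above, and the JSON-string idea), using the toy rows from this issue:
```python
import json

from datasets import Dataset

raw = [
    {"index": 0, "feature": {"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"]}},
    {"index": 9999, "feature": {"x >= 6": ["x >= 6", "x >= 0", "x >= -1"]}},
]

# Option A: fixed-schema "keys"/"values" columns, which Arrow can type consistently.
ds_a = Dataset.from_list(
    [
        {"index": r["index"], "keys": list(r["feature"]), "values": list(r["feature"].values())}
        for r in raw
    ]
)

# Option B: store the instance-specific dict as a JSON string and decode it on the fly.
ds_b = Dataset.from_list(
    [{"index": r["index"], "feature": json.dumps(r["feature"])} for r in raw]
)
feature_dict = json.loads(ds_b[0]["feature"])
```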
### Your contribution
N/A
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5950/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5950/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5947
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5947/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5947/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5947/events
|
https://github.com/huggingface/datasets/issues/5947
| 1,754,359,316
|
I_kwDODunzps5okWYU
| 5,947
|
Return the audio filename when decoding fails due to corrupt files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8949105?v=4",
"events_url": "https://api.github.com/users/wetdog/events{/privacy}",
"followers_url": "https://api.github.com/users/wetdog/followers",
"following_url": "https://api.github.com/users/wetdog/following{/other_user}",
"gists_url": "https://api.github.com/users/wetdog/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wetdog",
"id": 8949105,
"login": "wetdog",
"node_id": "MDQ6VXNlcjg5NDkxMDU=",
"organizations_url": "https://api.github.com/users/wetdog/orgs",
"received_events_url": "https://api.github.com/users/wetdog/received_events",
"repos_url": "https://api.github.com/users/wetdog/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wetdog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wetdog/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wetdog",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! The audio data don't always exist as files on disk - the blobs are often stored in the Arrow files. For now I'd suggest disabling decoding with `.cast_column(\"audio\", Audio(decode=False))` and apply your own decoding that handles corrupted files (maybe to filter them out ?)\r\n\r\ncc @sanchit-gandhi since it's related to our discussion about allowing users to make decoding return `None` and show a warning when there are corrupted files",
"Thanks @lhoestq, I wasn't aware of the decode flag. It makes more sense as you say to show a warning when there are corrupted files together with some metadata of the file that allows to filter them from the dataset.\r\n\r\nMy workaround was to catch the LibsndfileError and generate a dummy audio with an unsual sample rate to filter it later. However returning `None` seems better. \r\n\r\n`try:\r\n array, sampling_rate = sf.read(file)\r\nexcept sf.LibsndfileError:\r\n print(\"bad file\")\r\n array = np.array([0.0])\r\n sampling_rate = 99.000` \r\n\r\n"
] |
2023-06-13T08:44:09Z
|
2023-06-14T12:45:01Z
| null |
NONE
| null | null | null | null |
### Feature request
Return the audio filename when audio decoding fails. Although there are currently some checks for the mp3 and opus formats depending on the library version, there are still cases where audio decoding can fail, e.g. a corrupt file.
### Motivation
When you try to load an audio dataset and the decoding fails, you can't know which file is corrupt:
```
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.
```
### Your contribution
Make a PR to add exception handling for LibsndfileError so that the audio filename or path is returned when soundfile decoding fails.
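For illustration, a minimal sketch of the workaround suggested in the comments above: disable decoding with `Audio(decode=False)` and run your own decoding pass so the offending path can be reported (the `data_dir` and column names are assumptions, and it assumes the audio files still exist on disk):

```python
import soundfile as sf
from datasets import load_dataset, Audio

ds = load_dataset("audiofolder", data_dir="path/to/audio", split="train")
ds = ds.cast_column("audio", Audio(decode=False))  # keeps {"path", "bytes"} instead of decoding

def is_decodable(example):
    try:
        sf.read(example["audio"]["path"])
        return True
    except sf.LibsndfileError as err:
        print(f"Corrupt audio file: {example['audio']['path']} ({err})")
        return False

clean_ds = ds.filter(is_decodable)  # drop the corrupted files
```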
| null |
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5947/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5947/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5946
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5946/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5946/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5946/events
|
https://github.com/huggingface/datasets/issues/5946
| 1,754,234,469
|
I_kwDODunzps5oj35l
| 5,946
|
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/70565543?v=4",
"events_url": "https://api.github.com/users/syngokhan/events{/privacy}",
"followers_url": "https://api.github.com/users/syngokhan/followers",
"following_url": "https://api.github.com/users/syngokhan/following{/other_user}",
"gists_url": "https://api.github.com/users/syngokhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/syngokhan",
"id": 70565543,
"login": "syngokhan",
"node_id": "MDQ6VXNlcjcwNTY1NTQz",
"organizations_url": "https://api.github.com/users/syngokhan/orgs",
"received_events_url": "https://api.github.com/users/syngokhan/received_events",
"repos_url": "https://api.github.com/users/syngokhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/syngokhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syngokhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/syngokhan",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"https://colab.research.google.com/#scrollTo=AQ_HCYruWIHU&fileId=https%3A//huggingface.co/dfurman/falcon-40b-chat-oasst1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb\r\n\r\nI ran the same administration exactly the same but got the same error",
"Looks related to https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/4?u=lhoestq",
"> Looks related to https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/4?u=lhoestq\n\nThe problem has not been solved, I have tried this before, but the problem is the same",
"> \r\n\r\n@syngokhan did u solve it? \r\nI am desperate ",
"data = data[\"train\"].shuffle().map(generate_and_tokenize_prompt, batched = False) # change this line to -\r\n\r\ndata[\"train\"] = data[\"train\"].shuffle().map(generate_and_tokenize_prompt, batched = False)\r\nAfter doing this change you code should run fine.",
"> > \r\n> \r\n> @syngokhan did u solve it? I am desperate\r\n\r\nrefer to my earlier comment. you will find the solution."
] |
2023-06-13T07:34:15Z
|
2023-07-14T12:04:48Z
| null |
NONE
| null | null | null | null |
### Describe the bug
in <cell line: 1>:1 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train โ
โ โ
โ 1534 โ โ inner_training_loop = find_executable_batch_size( โ
โ 1535 โ โ โ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size โ
โ 1536 โ โ ) โ
โ โฑ 1537 โ โ return inner_training_loop( โ
โ 1538 โ โ โ args=args, โ
โ 1539 โ โ โ resume_from_checkpoint=resume_from_checkpoint, โ
โ 1540 โ โ โ trial=trial, โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop โ
โ โ
โ 1786 โ โ โ โ rng_to_sync = True โ
โ 1787 โ โ โ โ
โ 1788 โ โ โ step = -1 โ
โ โฑ 1789 โ โ โ for step, inputs in enumerate(epoch_iterator): โ
โ 1790 โ โ โ โ total_batched_samples += 1 โ
โ 1791 โ โ โ โ if rng_to_sync: โ
โ 1792 โ โ โ โ โ self._load_rng_state(resume_from_checkpoint) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ โ
โ โ
โ 374 โ โ dataloader_iter = super().__iter__() โ
โ 375 โ โ # We iterate one batch ahead to check when we are at the end โ
โ 376 โ โ try: โ
โ โฑ 377 โ โ โ current_batch = next(dataloader_iter) โ
โ 378 โ โ except StopIteration: โ
โ 379 โ โ โ yield โ
โ 380 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ โ
โ โ
โ 630 โ โ โ if self._sampler_iter is None: โ
โ 631 โ โ โ โ # TODO(https://github.com/pytorch/pytorch/issues/76750) โ
โ 632 โ โ โ โ self._reset() # type: ignore[call-arg] โ
โ โฑ 633 โ โ โ data = self._next_data() โ
โ 634 โ โ โ self._num_yielded += 1 โ
โ 635 โ โ โ if self._dataset_kind == _DatasetKind.Iterable and \ โ
โ 636 โ โ โ โ โ self._IterableDataset_len_called is not None and \ โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data โ
โ โ
โ 674 โ โ
โ 675 โ def _next_data(self): โ
โ 676 โ โ index = self._next_index() # may raise StopIteration โ
โ โฑ 677 โ โ data = self._dataset_fetcher.fetch(index) # may raise StopIteration โ
โ 678 โ โ if self._pin_memory: โ
โ 679 โ โ โ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) โ
โ 680 โ โ return data โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch โ
โ โ
โ 46 โ def fetch(self, possibly_batched_index): โ
โ 47 โ โ if self.auto_collation: โ
โ 48 โ โ โ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: โ
โ โฑ 49 โ โ โ โ data = self.dataset.__getitems__(possibly_batched_index) โ
โ 50 โ โ โ else: โ
โ 51 โ โ โ โ data = [self.dataset[idx] for idx in possibly_batched_index] โ
โ 52 โ โ else: โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ โ
โ โ
โ 2779 โ โ
โ 2780 โ def __getitems__(self, keys: List) -> List: โ
โ 2781 โ โ """Can be used to get a batch using a list of integers indices.""" โ
โ โฑ 2782 โ โ batch = self.__getitem__(keys) โ
โ 2783 โ โ n_examples = len(batch[next(iter(batch))]) โ
โ 2784 โ โ return [{col: array[i] for col, array in batch.items()} for i in range(n_example โ
โ 2785 โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ โ
โ โ
โ 2775 โ โ
โ 2776 โ def __getitem__(self, key): # noqa: F811 โ
โ 2777 โ โ """Can be used to index columns (by string names) or rows (by integer index or i โ
โ โฑ 2778 โ โ return self._getitem(key) โ
โ 2779 โ โ
โ 2780 โ def __getitems__(self, keys: List) -> List: โ
โ 2781 โ โ """Can be used to get a batch using a list of integers indices.""" โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem โ
โ โ
โ 2759 โ โ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ โ
โ 2760 โ โ format_kwargs = format_kwargs if format_kwargs is not None else {} โ
โ 2761 โ โ formatter = get_formatter(format_type, features=self._info.features, **format_kw โ
โ โฑ 2762 โ โ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice โ
โ 2763 โ โ formatted_output = format_table( โ
โ 2764 โ โ โ pa_subtable, key, formatter=formatter, format_columns=format_columns, output โ
โ 2765 โ โ ) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table โ
โ โ
โ 575 โ โ _check_valid_column_key(key, table.column_names) โ
โ 576 โ else: โ
โ 577 โ โ size = indices.num_rows if indices is not None else table.num_rows โ
โ โฑ 578 โ โ _check_valid_index_key(key, size) โ
โ 579 โ # Query the main table โ
โ 580 โ if indices is None: โ
โ 581 โ โ pa_subtable = _query_table(table, key) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in โ
โ _check_valid_index_key โ
โ โ
โ 528 โ โ โ _check_valid_index_key(min(key), size=size) โ
โ 529 โ elif isinstance(key, Iterable): โ
โ 530 โ โ if len(key) > 0: โ
โ โฑ 531 โ โ โ _check_valid_index_key(int(max(key)), size=size) โ
โ 532 โ โ โ _check_valid_index_key(int(min(key)), size=size) โ
โ 533 โ else: โ
โ 534 โ โ _raise_bad_key_type(key) โ
โ โ
โ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in โ
โ _check_valid_index_key โ
โ โ
โ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: โ
โ 519 โ if isinstance(key, int): โ
โ 520 โ โ if (key < 0 and key + size < 0) or (key >= size): โ
โ โฑ 521 โ โ โ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") โ
โ 522 โ โ return โ
โ 523 โ elif isinstance(key, slice): โ
โ 524 โ โ pass
### Steps to reproduce the bug
```python
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
MODEL_NAME = "tiiuae/falcon-7b"
bnb_config = BitsAndBytesConfig(
load_in_4bit = True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map = "auto",
trust_remote_code = True,
quantization_config = bnb_config
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
config = LoraConfig(
r = 16,
lora_alpha = 32,
target_modules = ["query_key_value"],
lora_dropout = 0.05,
bias = "none",
task_type = "CASUAL_LM"
)
model = get_peft_model(model,config)
print_trainable_parameters(model)
def generate_prompt(data_point):
return f"""
<human>: {data_point["question"]}
<assistant>: {data_point["answer"]}
""".strip()
def generate_and_tokenize_prompt(data_point):
full_prompt = generate_prompt(data_point)
tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None)
return dict({
"input_ids" : tokenized_full_prompt["input_ids"],
"attention_mask" : tokenized_full_prompt["attention_mask"]
})
data = data["train"].shuffle().map(generate_and_tokenize_prompt, batched = False)
OUTPUT_DIR = "experiments"
trainings_args = transformers.TrainingArguments(
per_device_train_batch_size = 1,
gradient_accumulation_steps = 4,
num_train_epochs = 1,
learning_rate = 2e-4,
fp16 = True,
save_total_limit = 3,
logging_steps = 1,
output_dir = OUTPUT_DIR,
max_steps = 80,
optim = "paged_adamw_8bit",
lr_scheduler_type = "cosine",
warmup_ratio = 0.05,
#remove_unused_columns=True
)
trainer = transformers.Trainer(
model = model,
train_dataset = data,
args = trainings_args,
data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False
trainer.train()
```

Training then fails with:

```
IndexError: Invalid key: 32 is out of bounds for size 0
```

The dataset format is like:

```
[{"question": "How can I create an account?", "answer": "To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process."}, .... ]
```
### Expected behavior
-
### Environment info
!pip install -q pip
!pip install -q bitsandbytes==0.39.0
!pip install -q torch==2.0.1
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q git+https://github.com/huggingface/peft.git
!pip install -q git+https://github.com/huggingface/accelerate.git
!pip install -q datasets
!pip install -q loralib==0.1.1
!pip install -q einops==0.6.1
import json
import os
from pprint import pprint
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import Dataset,load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5946/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5946/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5945/events
|
https://github.com/huggingface/datasets/issues/5945
| 1,754,084,577
|
I_kwDODunzps5ojTTh
| 5,945
|
Failing to upload dataset to the hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/77382661?v=4",
"events_url": "https://api.github.com/users/Ar770/events{/privacy}",
"followers_url": "https://api.github.com/users/Ar770/followers",
"following_url": "https://api.github.com/users/Ar770/following{/other_user}",
"gists_url": "https://api.github.com/users/Ar770/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ar770",
"id": 77382661,
"login": "Ar770",
"node_id": "MDQ6VXNlcjc3MzgyNjYx",
"organizations_url": "https://api.github.com/users/Ar770/orgs",
"received_events_url": "https://api.github.com/users/Ar770/received_events",
"repos_url": "https://api.github.com/users/Ar770/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ar770/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ar770/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ar770",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Feel free to re-run your code later, it will resume automatically where you left",
"Tried many times in the last 2 weeks, problem remains.",
"Alternatively you can save your dataset in parquet files locally and upload them to the hub manually\r\n\r\n```python\r\nfrom tqdm import tqdm\r\nnum_shards = 60\r\nfor index in tqdm(range(num_shards)):\r\n ds.shard(num_shards=num_shards, index=index, contiguous=True).to_parquet(f\"{index:05d}.parquet\")\r\n````"
] |
2023-06-13T05:46:46Z
|
2023-07-24T11:56:40Z
|
2023-07-24T11:56:40Z
|
NONE
| null | null | null | null |
### Describe the bug
Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 GB) to the Hub with push_to_hub doesn't work.
From time to time one piece of the data (a Parquet shard) gets pushed, and then I get RemoteDisconnected even though my internet connection is stable.
Please help.
I have been trying to upload the dataset for almost a week.
Thanks
### Steps to reproduce the bug
not relevant
### Expected behavior
Be able to upload the dataset.
### Environment info
python: 3.9
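For illustration, a minimal sketch of the manual route suggested in the comments above: write Parquet shards locally, then push the folder with `huggingface_hub` (the folder path and repo id are placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="parquet_shards",         # directory containing 00000.parquet, 00001.parquet, ...
    repo_id="username/my-audio-dataset",  # hypothetical dataset repo
    repo_type="dataset",
)
```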
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5945/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5945/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5941
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5941/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5941/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5941/events
|
https://github.com/huggingface/datasets/issues/5941
| 1,751,838,897
|
I_kwDODunzps5oavCx
| 5,941
|
Load Data Sets Too Slow In Train Seq2seq Model
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xyx361100238",
"id": 19569322,
"login": "xyx361100238",
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xyx361100238",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! you can speed it up using multiprocessing by passing `num_proc=` to `load_dataset()`",
"already did๏ผbut not useful for step Generating train split๏ผit works in step \"Resolving data files\" & \"Downloading data files\" ",
"@mariosasko some advice ๏ผ thanks๏ผ",
"I met the same problem, terrible experience",
"@mariosasko ",
"We need more info about the issue to provide help. \r\n\r\nCan you interrupt the process (with `num_proc=None`) after the `load_dataset` call when the slowdown occurs? So we can know what part of the code is causing it.\r\n\r\nThe `audiofolder` \\ `imagefolder` with metadata is not performant for large datasets. Luckily, we can make them much faster if drop the nested metadata files feature (not that useful). I plan to work on this soon.\r\n\r\nIn the meantime, it's better to use `Dataset.from_generator` (requires replacing the `load_dataset` calls in the transformers script with `Dataset.from_generator`) or write a dataset loading script for large datasets.",
"Can you interrupt the process (with num_proc=None) after the load_dataset call when the slowdown occurs? So we can know what part of the code is causing it.\r\n๏ผI'll try this operation๏ผ\r\nThe audiofolder \\ imagefolder with metadata is not performant for large datasets. Luckily, we can make them much faster if drop the nested metadata files feature (not that useful). I plan to work on this soon.\r\n(My data is indeed a bit large, exceeding 10000 hours of audio data. Looking forward to your improvement work very much)\r\n\r\nIn the meantime, it's better to use Dataset.from_generator (requires replacing the load_dataset calls in the transformers script with Dataset.from_generator) or write a dataset loading script for large datasets.\r\n๏ผI want to use Dataset.from_generator instead of load_dataset ๏ผwhere can i found sample code to load audio&label dataset๏ผ I was to do asr task๏ผ",
"Can you interrupt the process (with num_proc=None) after the load_dataset call when the slowdown occurs? So we can know what part of the code is causing it.\r\n================================================================================\r\nHere is the log๏ผ\r\n[load_dataset.log](https://github.com/huggingface/datasets/files/12169362/load_dataset.log)\r\n๏ผThe larger my training data, the slower it loads๏ผ\r\n\r\n\r\n",
"In the meantime, it's better to use Dataset.from_generator (requires replacing the load_dataset calls in the transformers script with Dataset.from_generator) or write a dataset loading script for large datasets.\r\n================================================================================\r\nI tried โDataset. from_generatorโ implements data loading, but the testing results show no improvement",
"I have already solved this problem, referring to #5990 : read audio frist, then use data_generator to change format ."
] |
2023-06-12T03:58:43Z
|
2023-08-15T02:52:22Z
|
2023-08-15T02:52:22Z
|
NONE
| null | null | null | null |
### Describe the bug
The 'Generating train split' step in load_dataset is too slow:

### Steps to reproduce the bug
Data: own data, 16 kHz 16-bit mono WAV files
Official script: [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py)
Added code:
```python
if data_args.data_path is not None:
    print(data_args.data_path)
    raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir)
    raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000))
    raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True)
```
(I also changed cache_dir to another path, e.g. /DATA/cache.)
### Expected behavior
Load data fast, at least 1000+ examples/s. Currently:
`Generating train split: 387875 examples [32:24:45, 1154.83 examples/s]`
### Environment info
- `transformers` version: 4.28.0.dev0
- Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
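For illustration, a minimal sketch of the `Dataset.from_generator` route suggested in the comments above for audio + transcript data (the metadata file layout and column names are assumptions):

```python
from datasets import Dataset, Audio

def gen():
    with open("metadata.csv") as f:  # hypothetical "path,transcript" file
        for line in f:
            path, transcript = line.rstrip("\n").split(",", 1)
            yield {"audio": path, "sentence": transcript}

ds = Dataset.from_generator(gen)
ds = ds.cast_column("audio", Audio(sampling_rate=16000))  # decode lazily on access
```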
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xyx361100238",
"id": 19569322,
"login": "xyx361100238",
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xyx361100238",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5941/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5941/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5990
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5990/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5990/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5990/events
|
https://github.com/huggingface/datasets/issues/5990
| 1,774,389,854
|
I_kwDODunzps5pwwpe
| 5,990
|
Pushing a large dataset on the hub consistently hangs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AntreasAntoniou",
"id": 10792502,
"login": "AntreasAntoniou",
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AntreasAntoniou",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi @AntreasAntoniou , sorry to know you are facing this issue. To help debugging it, could you tell me:\r\n- What is the total dataset size?\r\n- Is it always failing on the same shard or is the hanging problem happening randomly?\r\n- Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nI'm cc-ing @lhoestq who might have some insights from a `datasets` perspective.",
"One trick that can also help is to check the traceback when you kill your python process: it will show where in the code it was hanging",
"Right. So I did the trick @lhoestq suggested. Here is where things seem to hang\r\n\r\n```\r\nError while uploading 'data/train-00120-of-00195-466c2dbab2eb9989.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:03<00:00, 1.15s/ba]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:52<00:00, 52.12s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:03<00:00, 1.08s/ba]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:45<00:00, 45.54s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:03<00:00, 1.08s/ba]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:03<00:00, 1.03s/ba^Upload 1 LFS files: 0%| | 0/1 [\r\n21:27:35<?, ?it/s] \r\nPushing dataset shards to the dataset hub: 63%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 122/195 [23:37:11<14:07:59, 696.98s/it]\r\n^CError in sys.excepthook: \r\nTraceback (most recent call last): \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1699, in print \r\n extend(render(renderable, render_options)) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1335, in render \r\n yield from self.render(render_output, _options) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/constrain.py\", line 29, in __rich_console__ \r\n yield from console.render(self.renderable, child_options) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/panel.py\", line 220, in __rich_console__ \r\n lines = console.render_lines(renderable, child_options, style=style) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1371, in render_lines \r\n lines = list( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/segment.py\", line 292, in split_and_crop_lines \r\n for segment in segments: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/padding.py\", line 97, in __rich_console__ \r\n lines = console.render_lines( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1371, in render_lines \r\n lines = list( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/segment.py\", line 292, in split_and_crop_lines \r\n for segment in segments: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1335, in render \r\n yield from 
self.render(render_output, _options) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/console.py\", line 1331, in render \r\n for render_output in iter_render: \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/syntax.py\", line 611, in __rich_console__ \r\n segments = Segments(self._get_syntax(console, options)) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/segment.py\", line 668, in __init__ \r\n self.segments = list(segments) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/syntax.py\", line 674, in _get_syntax \r\n lines: Union[List[Text], Lines] = text.split(\"\\n\", allow_blank=ends_on_nl) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/text.py\", line 1042, in split \r\n lines = Lines( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/containers.py\", line 70, in __init__ \r\n self._lines: List[\"Text\"] = list(lines) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/text.py\", line 1043, in <genexpr> \r\n line for line in self.divide(flatten_spans()) if line.plain != separator \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/rich/text.py\", line 385, in plain \r\n if len(self._text) != 1: \r\nKeyboardInterrupt \r\n \r\nOriginal exception was: \r\nTraceback (most recent call last): \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map \r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/std.py\", line 1178, in __iter__ \r\n for obj in iterable: \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 621, in result_iterator \r\n yield _result_or_cancel(fs.pop()) \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 319, in _result_or_cancel \r\n return fut.result(timeout) \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 453, in result \r\n self._condition.wait(timeout) \r\n File \"/opt/conda/envs/main/lib/python3.10/threading.py\", line 320, in wait \r\n waiter.acquire() \r\nKeyboardInterrupt \r\n \r\nDuring handling of the above exception, another exception occurred: \r\n \r\nTraceback (most recent call last): \r\n File \"/TALI/tali/scripts/validate_dataset.py\", line 127, in <module> \r\n train_dataset.push_to_hub(repo_id=\"Antreas/TALI-base\", max_shard_size=\"5GB\") \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/datasets/dataset_dict.py\", line 1583, in push_to_hub \r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 5275, in _push_parquet_shards_to_hub \r\n _retry( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/datasets/utils/file_utils.py\", line 282, in _retry \r\n return func(*func_args, **func_kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3205, in upload_file \r\n commit_info = self.create_commit( \r\n File 
\"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 826, in _inner \r\n return fn(self, *args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 2680, in create_commit \r\n upload_lfs_files( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn \r\n return fn(*args, **kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/_commit_api.py\", line 353, in upload_lfs_files \r\n thread_map( \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map \r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) \r\n File \"/opt/conda/envs/main/lib/python3.10/site-packages/tqdm/contrib/concurrent.py\", line 49, in _executor_map \r\n with PoolExecutor(max_workers=max_workers, initializer=tqdm_class.set_lock, \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/_base.py\", line 649, in __exit__ \r\n self.shutdown(wait=True) \r\n File \"/opt/conda/envs/main/lib/python3.10/concurrent/futures/thread.py\", line 235, in shutdown \r\n t.join() \r\n File \"/opt/conda/envs/main/lib/python3.10/threading.py\", line 1096, in join \r\n self._wait_for_tstate_lock() \r\n File \"/opt/conda/envs/main/lib/python3.10/threading.py\", line 1116, in _wait_for_tstate_lock \r\n if lock.acquire(block, timeout): \r\nKeyboardInterrupt \r\n```",
"@Wauplin \r\n\r\n>What is the total dataset size?\r\n\r\nThere are three variants, and the random hanging happens on all three. The sizes are 2TB, 1TB, and 200GB. \r\n\r\n>Is it always failing on the same shard or is the hanging problem happening randomly?\r\n\r\nIt seems to be very much random, as restarting can help move past the previous hang, only to find a new one, or not. \r\n\r\n>Were you able to save the dataset as parquet locally? This would help us determine if the problem comes from the upload or the file generation.\r\n\r\nYes. The dataset seems to be locally stored as parquet. ",
"Hmm it looks like an issue with TQDM lock. Maybe you can try updating TQDM ?",
"I am using the latest version of tqdm\r\n\r\n```\r\nโฌข [Docker] โฏ pip install tqdm --upgrade\r\nRequirement already satisfied: tqdm in /opt/conda/envs/main/lib/python3.10/site-packages (4.65.0)\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\r\n```",
"I tried trying to catch the hanging issue in action again\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 65%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 127/195 [2:28:02<1:19:15, 69.94s/it] \r\nError while uploading 'data/train-00127-of-00195-3f8d036ade107c27.parquet' to the Hub. \r\nPushing split train to the Hub. \r\nPushing dataset shards to the dataset hub: 64%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 124/195 [2:06:10<1:12:14, 61.05s/it]C^[^C^C^C \r\nโญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ \r\nโ /TALI/tali/scripts/validate_dataset.py:127 in <module> โ \r\nโ โ \r\nโ 124 โ โ \r\nโ 125 โ while not succesful_competion: โ \r\nโ 126 โ โ try: โ \r\nโ โฑ 127 โ โ โ train_dataset.push_to_hub(repo_id=\"Antreas/TALI-base\", max_shard_size=\"5GB\") โ \r\nโ 128 โ โ โ succesful_competion = True โ \r\nโ 129 โ โ except Exception as e: โ \r\nโ 130 โ โ โ print(e) โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/dataset_dict.py:1583 in push_to_hub โ \r\nโ โ \r\nโ 1580 โ โ for split in self.keys(): โ \r\nโ 1581 โ โ โ logger.warning(f\"Pushing split {split} to the Hub.\") โ \r\nโ 1582 โ โ โ # The split=key needs to be removed before merging โ \r\nโ โฑ 1583 โ โ โ repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parq โ \r\nโ 1584 โ โ โ โ repo_id, โ \r\nโ 1585 โ โ โ โ split=split, โ \r\nโ 1586 โ โ โ โ private=private, โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:5263 in โ \r\nโ _push_parquet_shards_to_hub โ \r\nโ โ \r\nโ 5260 โ โ โ \r\nโ 5261 โ โ uploaded_size = 0 โ \r\nโ 5262 โ โ shards_path_in_repo = [] โ \r\nโ โฑ 5263 โ โ for index, shard in logging.tqdm( โ \r\nโ 5264 โ โ โ enumerate(itertools.chain([first_shard], shards_iter)), โ \r\nโ 5265 โ โ โ desc=\"Pushing dataset shards to the dataset hub\", โ \r\nโ 5266 โ โ โ total=num_shards, โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/tqdm/std.py:1178 in __iter__ โ \r\nโ โ \r\nโ 1175 โ โ time = self._time โ \r\nโ 1176 โ โ โ \r\nโ 1177 โ โ try: โ\r\nโ โฑ 1178 โ โ โ for obj in iterable: โ\r\nโ 1179 โ โ โ โ yield obj โ\r\nโ 1180 โ โ โ โ # Update and possibly print the progressbar. โ\r\nโ 1181 โ โ โ โ # Note: does not call self.update(1) for speed optimisation. 
โ\r\nโ โ\r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:5238 in โ\r\nโ shards_with_embedded_external_files โ\r\nโ โ\r\nโ 5235 โ โ โ โ for shard in shards: โ\r\nโ 5236 โ โ โ โ โ format = shard.format โ\r\nโ 5237 โ โ โ โ โ shard = shard.with_format(\"arrow\") โ\r\nโ โฑ 5238 โ โ โ โ โ shard = shard.map( โ\r\nโ 5239 โ โ โ โ โ โ embed_table_storage, โ\r\nโ 5240 โ โ โ โ โ โ batched=True, โ\r\nโ 5241 โ โ โ โ โ โ batch_size=1000, โ\r\nโ โ\r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:578 in wrapper โ\r\nโ โ\r\nโ 575 โ โ else: โ\r\nโ 576 โ โ โ self: \"Dataset\" = kwargs.pop(\"self\") โ\r\nโ 577 โ โ # apply actual function โ\r\nโ โฑ 578 โ โ out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) โ \r\nโ 579 โ โ datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou โ \r\nโ 580 โ โ for dataset in datasets: โ \r\nโ 581 โ โ โ # Remove task templates if a column mapping of the template is no longer val โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:543 in wrapper โ \r\nโ โ \r\nโ 540 โ โ โ \"output_all_columns\": self._output_all_columns, โ \r\nโ 541 โ โ } โ \r\nโ 542 โ โ # apply actual function โ \r\nโ โฑ 543 โ โ out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs) โ \r\nโ 544 โ โ datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [ou โ \r\nโ 545 โ โ # re-apply format to the output โ \r\nโ 546 โ โ for dataset in datasets: โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:3073 in map โ \r\nโ โ \r\nโ 3070 โ โ โ โ โ leave=False, โ \r\nโ 3071 โ โ โ โ โ desc=desc or \"Map\", โ \r\nโ 3072 โ โ โ โ ) as pbar: โ \r\nโ โฑ 3073 โ โ โ โ โ for rank, done, content in Dataset._map_single(**dataset_kwargs): โ \r\nโ 3074 โ โ โ โ โ โ if done: โ \r\nโ 3075 โ โ โ โ โ โ โ shards_done += 1 โ \r\nโ 3076 โ โ โ โ โ โ โ logger.debug(f\"Finished processing shard number {rank} of {n โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_dataset.py:3464 in _map_single โ \r\nโ โ \r\nโ 3461 โ โ โ โ โ โ โ โ buf_writer, writer, tmp_file = init_buffer_and_writer() โ \r\nโ 3462 โ โ โ โ โ โ โ โ stack.enter_context(writer) โ \r\nโ 3463 โ โ โ โ โ โ โ if isinstance(batch, pa.Table): โ \r\nโ โฑ 3464 โ โ โ โ โ โ โ โ writer.write_table(batch) โ \r\nโ 3465 โ โ โ โ โ โ โ else: โ \r\nโ 3466 โ โ โ โ โ โ โ โ writer.write_batch(batch) โ \r\nโ 3467 โ โ โ โ โ โ num_examples_progress_update += num_examples_in_batch โ \r\nโ โ \r\nโ /opt/conda/envs/main/lib/python3.10/site-packages/datasets/arrow_writer.py:567 in write_table โ \r\nโ โ \r\nโ 564 โ โ โ writer_batch_size = self.writer_batch_size โ \r\nโ 565 โ โ if self.pa_writer is None: โ \r\nโ 566 โ โ โ self._build_writer(inferred_schema=pa_table.schema) โ \r\nโ โฑ 567 โ โ pa_table = pa_table.combine_chunks() โ \r\nโ 568 โ โ pa_table = table_cast(pa_table, self._schema) โ \r\nโ 569 โ โ if self.embed_local_files: โ \r\nโ 570 โ โ โ pa_table = embed_table_storage(pa_table) โ \r\nโฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ \r\nKeyboardInterrupt \r\n```",
"I'm on my phone so can't help that much. What I'd advice to do is to [save_to_disk](https://huggingface.co/docs/datasets/package_reference/main_classes#save_to_disk) if it's not already done and then upload the files/folder to the Hub separately. You can find what you need in the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload). It might not help finding the exact issue for now but at least it can unblock you. ",
"In your last stacktrace it interrupted while embedding external content - in case your dataset in made of images or audio files that live on your disk. Is it the case ?",
"Yeah, the dataset has images, audio, video and text. ",
"It's maybe related to https://github.com/apache/arrow/issues/34455: are you using ArrayND features ?\r\n\r\nAlso what's your `pyarrow` version ? Could you try updating to >= 12.0.1 ?",
"I was using pyarrow == 12.0.0\r\n\r\nI am not explicitly using ArrayND features, unless the hub API automatically converts my files to such. ",
"I have now updated to pyarrow == 12.0.1 and retrying",
"You can also try to reduce the `max_shard_size` - Sometimes parquet has a hard time working with data bigger than 2GB",
"So, updating the pyarrow seems to help. It can still throw errors here and there but I can retry when that happens. It's better than hanging. \r\n\r\nHowever, I am a bit confused about something. I have uploaded my datasets, but while earlier I could see all three sets, now I can only see 1. What's going on? \r\nhttps://huggingface.co/datasets/Antreas/TALI-base\r\n\r\nI have seen this happen before as well, so I deleted and reuploaded, but this dataset is way too large for me to do this. ",
"It's a bug on our side, I'll update the dataset viewer ;)\r\n\r\nThanks for reporting !",
"Apparently this happened because of bad modifications in the README.md split metadata.\r\n\r\nI fixed them in this PR: https://huggingface.co/datasets/Antreas/TALI-base/discussions/1",
"@lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact. ",
"Also, just found another related issue. One of the many that make things hang or fail when pushing to hub. \r\n\r\nIn the following code:\r\n\r\n```python\r\ntrain_generator = lambda: data_generator(\"train\", percentage=1.0)\r\n val_generator = lambda: data_generator(\"val\")\r\n test_generator = lambda: data_generator(\"test\")\r\n\r\n train_data = datasets.Dataset.from_generator(\r\n train_generator,\r\n num_proc=mp.cpu_count(),\r\n writer_batch_size=5000,\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n val_data = datasets.Dataset.from_generator(\r\n val_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n test_data = datasets.Dataset.from_generator(\r\n test_generator,\r\n writer_batch_size=5000,\r\n num_proc=mp.cpu_count(),\r\n cache_dir=tali_dataset_dir,\r\n )\r\n\r\n print(f\"Pushing TALI-large to hub\")\r\n\r\n dataset = datasets.DatasetDict(\r\n {\"train\": train_data, \"val\": val_data, \"test\": test_data}\r\n )\r\n succesful_competion = False\r\n\r\n while not succesful_competion:\r\n try:\r\n dataset.push_to_hub(repo_id=\"Antreas/TALI-large\", max_shard_size=\"2GB\")\r\n succesful_competion = True\r\n except Exception as e:\r\n print(e)\r\n ```\r\n \r\n \r\n Things keep failing in the push_to_repo step, at random places, with the following error:\r\n \r\n ```bash\r\n Pushing dataset shards to the dataset hub: 7%|โโโโโโโโโโโ | 67/950 [42:41<9:22:37, 38.23s/it]\r\nError while uploading 'data/train-00067-of-00950-a4d179ed5a593486.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:01<00:00, 1.81ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:11<00:00, 11.20s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.48ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:15<00:00, 15.30s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.39ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:11<00:00, 11.52s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.47ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.39s/it]\r\nCreating parquet from Arrow format: 
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.26ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:38<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 7%|โโโโโโโโโโโโ | 71/950 [44:37<9:12:28, 37.71s/it]\r\nError while uploading 'data/train-00071-of-00950-72bab6e5cb223aee.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.18ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.94s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.36ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.67s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.57ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.16s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.68ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:09<00:00, 9.63s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.36ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.67s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.37ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:39<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 8%|โโโโโโโโโโโโ | 76/950 [46:21<8:53:08, 36.60s/it]\r\nError while uploading 'data/train-00076-of-00950-b90e4e3b433db179.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.21ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:25<00:00, 25.40s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:01<00:00, 1.56ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.40s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.49ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:23<00:00, 23.53s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.27ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.25s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.42ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:11<00:00, 11.03s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.39ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:39<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 9%|โโโโโโโโโโโโโ | 81/950 [48:30<8:40:22, 35.93s/it]\r\nError while uploading 'data/train-00081-of-00950-84b0450a1df093a9.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.18ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:11<00:00, 11.65s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:01<00:00, 
1.92ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:38<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 9%|โโโโโโโโโโโโโ | 82/950 [48:55<8:37:57, 35.80s/it]\r\nError while uploading 'data/train-00082-of-00950-0a1f52da35653e08.parquet' to the Hub.\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.31ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:26<00:00, 26.29s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.42ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.57s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.64ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:10<00:00, 10.35s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.64ba/s]\r\nUpload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:11<00:00, 11.74s/it]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 2.31ba/s]\r\nUpload 1 LFS files: 0%| | 0/1 [16:40<?, ?it/s]\r\nPushing dataset shards to the dataset hub: 9%|โโโโโโโโโโโโโโ | 86/950 [50:48<8:30:25, 35.45s/it]\r\nError while uploading 'data/train-00086-of-00950-e1cc80dd17191b20.parquet' to the Hub.\r\n```\r\n\r\nI have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long. \r\n\r\nShould I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"? \r\n\r\nThank you for your help and time. ",
"> @lhoestq It's a bit odd that when uploading a dataset, one set at a time \"train\", \"val\", \"test\", the push_to_hub function overwrites the readme and removes differently named sets from previous commits. i.e., you push \"val\", all is well. Then you push \"test\", and the \"val\" entry disappears from the readme, while the data remain intact.\r\n\r\nHmm this shouldn't happen. What code did you run exactly ? Using which version of `datasets` ?",
"> I have a while loop that forces retries, but it seems that the progress itself is randomly getting lost as well. Any ideas on how to improve this? It has been blocking me for way too long.\r\n\r\nCould you also print the cause of the error (`e.__cause__`) ? Or show the full stack trace when the error happens ?\r\nThis would give more details about why it failed and would help investigate.",
"> Should I build the parquet manually and then push manually as well? If I do things manually, how can I ensure my dataset works properly with \"stream=True\"?\r\n\r\nParquet is supported out of the box ^^\r\n\r\nIf you want to make sure it works as expected you can try locally first:\r\n```python\r\nds = load_dataset(\"path/to/local\", streaming=True)\r\n```",
"@lhoestq @AntreasAntoniou I transferred this issue to the `datasets` repository as the questions and answers are more related to this repo. Hope it can help other users find the bug and fixes more easily (like updating [tqdm](https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120204) and [pyarrow](https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120278) or [setting a lower `max_shard_size`](https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120328)).\r\n\r\n~For the initial \"pushing large dataset consistently hangs\"-issue, I still think it's best to try to `save_to_disk` first and then upload it manually/with a script (see [upload_folder](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder)). It's not the most satisfying solution but at least it would confirm from where the problem comes from.~\r\n\r\n**EDIT:** removed suggestion about saving to disk first (see https://github.com/huggingface/datasets/issues/5990#issuecomment-1607186914).",
"> @lhoestq @AntreasAntoniou I transferred this issue to the datasets repository as the questions and answers are more related to this repo. Hope it can help other users find the bug and fixes more easily (like updating https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120204 and https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120278 or https://github.com/huggingface/datasets/issues/5990#issuecomment-1607120328).\r\n\r\nthanks :)\r\n\r\n> For the initial \"pushing large dataset consistently hangs\"-issue, I still think it's best to try to save_to_disk first and then upload it manually/with a script (see [upload_folder](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder)). It's not the most satisfying solution but at least it would confirm from where the problem comes from.\r\n\r\nAs I've already said in other discussions, I would not recommend pushing files saved with `save_to_disk` to the Hub but save to parquet shards and upload them instead. The Hub does not support datasets saved with `save_to_disk`, which is meant for disk only.",
"> As I've already said in other discussions, I would not recommend pushing files saved with save_to_disk to the Hub but save to parquet shards and upload them instead. The Hub does not support datasets saved with save_to_disk, which is meant for disk only.\r\n\r\nWell noted, thanks. That part was not clear to me :)",
"Sorry for not replying in a few days, I was on leave. :) \r\n\r\nSo, here are more information as to the error that causes some of the delay\r\n\r\n```bash\r\nPushing Antreas/TALI-tiny to hub\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:24<00:00, 4.06s/ba]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:24<00:00, 4.15s/ba]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:26<00:00, 4.45s/ba]\r\n/opt/conda/envs/main/lib/python3.10/site-packages/huggingface_hub/lfs.py:310: UserWarning: hf_transfer is enabled but does not support uploading from bytes or BinaryIO, falling back to regular upload\r\n warnings.warn(\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:25<00:00, 4.26s/ba]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:27<00:00, 4.58s/ba]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 6/6 [00:24<00:00, 4.10s/ba]\r\nPushing dataset shards to the dataset hub: 22%|โโโโโโโโโโโโโโโโโโโโโโโโโ | 5/23 [52:23<3:08:37, 628.74s/it]\r\nException: Error while uploading 'data/train-00005-of-00023-e224d901fd65e062.parquet' to the Hub., with stacktrace: <traceback object at 0x7f745458d0c0>, and type: <class 'RuntimeError'>, and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n/lfs.huggingface.co/repos/7c/d3/7cd385d9324302dc13e3986331d72d9be6fa0174c63dcfe0e08cd474f7f1e8b7/3415166ae28c0beccbbc692f38742b8dea2c197f5c805321104e888d21d7eb90?X-Amz-Algorithm=AWS4-HMAC-SHA256\r\n&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230627%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230627T003349Z&X-Amz-Expires=86400&X-Amz-Signature=5a12ff96f2\r\n91f644134170992a6628e5f3c4e7b2e7fc3e940b4378fe11ae5390&X-Amz-SignedHeaders=host&partNumber=1&uploadId=JSsK8r63XSF.VlKQx3Vf8OW4DEVp5YIIY7LPnuapNIegsxs5EHgM1p4u0.Nn6_wlPlQnvxm8HKMxZhczKE9KB74t0etB\r\noLcxqBIvsgey3uXBTZMAEGwU6y7CDUADiEIO&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\n```\r\n\r\nOne issue is that the uploading does not continue from the chunk it failed off. It often continues from a very old chunk. e.g. if it failed on chunk 192/250, it will continue from say 53/250, and this behaviour appears almost random. ",
"Are you using a proxy of some sort ?",
"I am using a kubernetes cluster built into a university VPN. ",
"So, other than the random connection drops here and there, any idea why the progress does not continue where it left off?\r\n\r\n```bash\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 10.79ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 13.65ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 13.39ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 13.04ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 13.52ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 12.28ba/s]\r\nPushing dataset shards to the dataset hub: 20%|โโโโโโโโโโโโโโโโโโโโโโ | 75/381 [1:34:39<6:26:11, 75.72s/it]\r\nException: Error while uploading 'data/train-00075-of-00381-1614bc251b778766.parquet' to the Hub., with stacktrace: <traceback object at 0x7fab6d9a4980>, and type: <class 'RuntimeError'>, and \r\ncause: HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n/lfs.huggingface.co/repos/3b/31/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5/ed8dae933fb79ae1ef5fb1f698f5125d3e1c02977ac69438631f152bb3bfdd1e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-\r\nAmz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T053004Z&X-Amz-Expires=86400&X-Amz-Signature=da2b26270edfd6d0\r\nd069c015a5a432031107a8664c3f0917717e5e40c688183c&X-Amz-SignedHeaders=host&partNumber=1&uploadId=2erWGHTh3ICqBLU_QvHfnygZ2tkMWbL0rEqpJdYohCKHUHnfwMjvoBIg0TI_KSGn4rSKxUxOyqSIzFUFSRSzixZeLeneaXJOw.Qx8\r\nzLKSV5xV7HRQDj4RBesNve6cSoo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 12.09ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 11.51ba/s]\r\nCreating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 28/28 [00:02<00:00, 10.77ba/s]\r\nPushing dataset shards to the dataset hub: 20%|โโโโโโโโโโโโโโโโโโโโโโโ | 77/381 [1:32:50<6:06:34, 72.35s/it]\r\nException: Error while uploading 'data/train-00077-of-00381-368b2327a9908aab.parquet' to the Hub., with stacktrace: <traceback object at 0x7fab45b27f80>, and type: <class 'RuntimeError'>, and \r\ncause: 
HTTPSConnectionPool(host='s3.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: \r\n/lfs.huggingface.co/repos/3b/31/3b311464573d8d63b137fcd5b40af1e7a5b1306843c88e80372d0117157504e5/9462ff2c5e61283b53b091984a22de2f41a2f6e37b681171e2eca4a998f979cb?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-\r\nAmz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230629T070510Z&X-Amz-Expires=86400&X-Amz-Signature=9ab8487b93d443cd\r\n21f05476405855d46051a0771b4986bbb20f770ded21b1a4&X-Amz-SignedHeaders=host&partNumber=1&uploadId=UiHX1B.DcoAO2QmIHpWpCuNPwhXU_o1dsTkTGPqZt1P51o9k0yz.EsFD9eKpQMwgAST3jOatRG78I_JWRBeLBDYYVNp8r0TpIdeSg\r\neUg8uwPZOCPw9y5mWOw8MWJrnBo&x-id=UploadPart (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2426)')))\r\nPush failed, retrying\r\nAttempting to push to hub\r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 8%|โโโโโโโโโ | 29/381 [27:39<5:50:03, 59.67s/it]\r\nMap: 36%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 1000/2764 [00:35<00:34, 51.63 examples/Map: 72%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 2000/2764 [00:40<00:15, 49.06 examples/Map: 72%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 2000/2764 [00:55<00:15, 49.06 examples/Map: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2764/2764 [00:56<00:00, 48.82 examples/Pushing dataset shards to the dataset hub: 8%|โโโโโโโโโ | 30/381 [28:35<5:43:03, 58.64s/iPushing dataset shards to the dataset hub: 8%|โโโโโโโโโโ | 31/381 [29:40<5:52:18, 60.40s/iPushing dataset shards to the dataset hub: 8%|โโโโโโโโโโ | 32/381 [30:46<6:02:20, 62.29s/it] \r\nMap: 36%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ \r\n```\r\n\r\nThis is actually the issue that wastes the most time for me, and I need it fixed. Please advice on how I can go about it.\r\n\r\nNotice how the progress goes from \r\n| 77/381 to 30/381",
"If the any shard is missing on the Hub, it will re-upload it. It looks like the 30th shard was missing on the Hub in your case. \r\n\r\nIt also means that the other files up to the 77th that were successfully uploaded won't be uploaded again.\r\n\r\ncc @mariosasko who might know better"
] |
2023-06-10T14:46:47Z
|
2025-02-15T09:29:10Z
| null |
NONE
| null | null | null | null |
### Describe the bug
Once I have built a large dataset locally and want to push it to the Hub, I use the recommended `.push_to_hub` approach, and after pushing a few shards it consistently hangs. This has happened over 40 times in the past week, and despite my best efforts to catch it happening, kill the process, and restart, it wastes an enormous amount of time -- so I am reporting it here and asking for help.
I already tried installing `hf_transfer`, but it doesn't support uploading from bytes/BinaryIO, so I uninstalled it.
### Reproduction
```python
import multiprocessing as mp
import pathlib
from math import ceil
import datasets
import numpy as np
from tqdm.auto import tqdm
from tali.data.data import select_subtitles_between_timestamps
from tali.utils import load_json
tali_dataset_dir = "/data/"
if __name__ == "__main__":
    full_dataset = datasets.load_dataset(
        "Antreas/TALI", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir
    )

    def data_generator(set_name, percentage: float = 1.0):
        dataset = full_dataset[set_name]
        for item in tqdm(dataset):
            video_list = item["youtube_content_video"]
            video_list = np.random.choice(
                video_list, int(ceil(len(video_list) * percentage))
            )
            if len(video_list) == 0:
                continue
            captions = item["youtube_subtitle_text"]
            captions = select_subtitles_between_timestamps(
                subtitle_dict=load_json(
                    captions.replace(
                        "/data/",
                        tali_dataset_dir,
                    )
                ),
                starting_timestamp=0,
                ending_timestamp=100000000,
            )
            for video_path in video_list:
                temp_path = video_path.replace("/data/", tali_dataset_dir)
                video_path_actual: pathlib.Path = pathlib.Path(temp_path)
                if video_path_actual.exists():
                    item["youtube_content_video"] = open(video_path_actual, "rb").read()
                    item["youtube_subtitle_text"] = captions
                    yield item

    train_generator = lambda: data_generator("train", percentage=0.1)
    val_generator = lambda: data_generator("val")
    test_generator = lambda: data_generator("test")

    train_data = datasets.Dataset.from_generator(
        train_generator,
        num_proc=mp.cpu_count(),
        writer_batch_size=5000,
        cache_dir=tali_dataset_dir,
    )
    val_data = datasets.Dataset.from_generator(
        val_generator,
        writer_batch_size=5000,
        num_proc=mp.cpu_count(),
        cache_dir=tali_dataset_dir,
    )
    test_data = datasets.Dataset.from_generator(
        test_generator,
        writer_batch_size=5000,
        num_proc=mp.cpu_count(),
        cache_dir=tali_dataset_dir,
    )

    dataset = datasets.DatasetDict(
        {
            "train": train_data,
            "val": val_data,
            "test": test_data,
        }
    )

    successful_completion = False
    while not successful_completion:
        try:
            dataset.push_to_hub(repo_id="Antreas/TALI-small", max_shard_size="5GB")
            successful_completion = True
        except Exception as e:
            print(e)
```
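For reference, a minimal sketch of the alternative suggested in the discussion (writing ~5 GB parquet shards locally and uploading the folder with `huggingface_hub`). The shard counts, local paths, and folder layout below are assumptions for illustration, not part of the original script:
```python
# Hypothetical sketch, not the original script: export each split to parquet shards
# on disk, then upload the whole folder in a single call.
import os
from huggingface_hub import HfApi

out_dir = "/data/tali_parquet"
num_shards = {"train": 40, "val": 5, "test": 5}  # assumed shard counts

for split, ds in dataset.items():
    os.makedirs(os.path.join(out_dir, "data"), exist_ok=True)
    for i in range(num_shards[split]):
        shard = ds.shard(num_shards=num_shards[split], index=i, contiguous=True)
        shard.to_parquet(
            os.path.join(out_dir, "data", f"{split}-{i:05d}-of-{num_shards[split]:05d}.parquet")
        )

HfApi().upload_folder(folder_path=out_dir, repo_id="Antreas/TALI-small", repo_type="dataset")
```
My understanding (not verified here) is that re-running `upload_folder` after a dropped connection skips LFS objects that are already on the Hub, so retries would not restart from scratch.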
### Logs
```shell
Pushing dataset shards to the dataset hub: 33%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 7/21 [24:33<49:06, 210.45s/it]
Error while uploading 'data/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub.
Pushing split train to the Hub.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 46/46 [42:10<00:00, 55.01s/it]
Pushing split val to the Hub.
Resuming upload of the dataset shards.
Creating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:01<00:00, 1.55ba/s]
Upload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:23<00:00, 23.51s/it]
Creating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:02<00:00, 1.39ba/s]
Upload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:30<00:00, 30.19s/it]
Creating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:02<00:00, 1.28ba/s]
Upload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:24<00:00, 24.08s/it]
Creating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:02<00:00, 1.42ba/s]
Upload 1 LFS files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:23<00:00, 23.97s/it]
Creating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:02<00:00, 1.49ba/s]
Creating parquet from Arrow format: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:02<00:00, 1.54ba/s^
Upload 1 LFS files: 0%| | 0/1 [04:42<?, ?it/s]
Pushing dataset shards to the dataset hub: 52%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 11/21 [17:23<15:48, 94.82s/it]
That's where it got stuck
```
### System info
```shell
- huggingface_hub version: 0.15.1
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: Antreas
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.0.dev20230606+cu121
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.5.0
- hf_transfer: N/A
- gradio: N/A
- numpy: 1.24.3
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5990/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5990/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5939
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5939/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5939/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5939/events
|
https://github.com/huggingface/datasets/issues/5939
| 1,749,955,883
|
I_kwDODunzps5oTjUr
| 5,939
|
.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flckv",
"id": 103381497,
"login": "flckv",
"node_id": "U_kgDOBil5-Q",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"repos_url": "https://api.github.com/users/flckv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flckv",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] |
2023-06-09T14:01:34Z
|
2023-06-12T12:19:34Z
|
2023-06-12T12:19:19Z
|
NONE
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flckv",
"id": 103381497,
"login": "flckv",
"node_id": "U_kgDOBil5-Q",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"repos_url": "https://api.github.com/users/flckv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flckv",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5939/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5939/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5936
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5936/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5936/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5936/events
|
https://github.com/huggingface/datasets/issues/5936
| 1,748,424,388
|
I_kwDODunzps5oNtbE
| 5,936
|
Sequence of array not supported for most dtype
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Related, `float16` is the only dtype not supported by `Array2D` (probably by every `ArrayND`):\r\n\r\n```python\r\nfrom datasets import Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # ok\r\n \"int16\", # ok\r\n \"int32\", # ok\r\n \"int64\", # ok\r\n \"uint8\", # ok\r\n \"uint16\", # ok\r\n \"uint32\", # ok\r\n \"uint64\", # ok\r\n \"float16\", # failed\r\n \"float32\", # ok\r\n \"float64\", # ok\r\n]:\r\n features = Features({\"foo\": Array2D(dtype=dtype, shape=(3, 4))})\r\n array = np.zeros((3, 4), dtype=dtype)\r\n try:\r\n dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n except Exception as e:\r\n print(f\"Failed for dtype={dtype}\")\r\n```",
"Here's something I can't explain:\r\n\r\nWhen an array is encoded in the `from_dict` method, the numpy array is converted to a list (thus losing the original dtype, which is transfromed to the nearest builtin Python type)\r\n\r\nhttps://github.com/huggingface/datasets/blob/6ee61e6e695b1df9f232d47faf3a5e2b30b33737/src/datasets/features/features.py#L524-L525\r\n\r\nHowever, later on, this same data is written to memory, and it seems authorized that the data is an array (or in this case, a list of arrays). \r\n\r\nhttps://github.com/huggingface/datasets/blob/6ee61e6e695b1df9f232d47faf3a5e2b30b33737/src/datasets/arrow_writer.py#L185-L186\r\n\r\nSo the question is: why convert it to a Python list? This seems to be quite expensive both in terms of write time (all data is copied) and memory (e.g., an int8 is converted to an int64).\r\n\r\nFinally, if I try to remove this step, it solves all the previous problems, and it seems to me that it doesn't break anything (the CI passes without problem).",
"Arrow only support 1d numpy arrays, so we convert multidim arrays to lists of 1s arrays (and keep the dtype).\r\n\r\nThough you noticed that it's concerting to lists and lose the dtype. If it's the case then it's a bug.",
"Ok the conversion to list shouldn't be there indeed ! Could you open a PR to remove it ?"
] |
2023-06-08T18:18:07Z
|
2023-06-14T15:03:34Z
|
2023-06-14T15:03:34Z
|
MEMBER
| null | null | null | null |
### Describe the bug
Creating a dataset composed of sequences of arrays fails for most dtypes (see the code below).
### Steps to reproduce the bug
```python
from datasets import Sequence, Array2D, Features, Dataset
import numpy as np
for dtype in [
    "bool",  # ok
    "int8",  # failed
    "int16",  # failed
    "int32",  # failed
    "int64",  # ok
    "uint8",  # failed
    "uint16",  # failed
    "uint32",  # failed
    "uint64",  # failed
    "float16",  # failed
    "float32",  # failed
    "float64",  # ok
]:
    features = Features({"foo": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})
    sequence = [
        [[1.0, 2.0], [3.0, 4.0]],
        [[5.0, 6.0], [7.0, 8.0]],
    ]
    array = np.array(sequence, dtype=dtype)
    try:
        dataset = Dataset.from_dict({"foo": [array]}, features=features)
    except Exception as e:
        print(f"Failed for dtype={dtype}")
```
Traceback for `dtype="int8"`:
```
Traceback (most recent call last):
File "/home/qgallouedec/datasets/a.py", line 29, in <module>
raise e
File "/home/qgallouedec/datasets/a.py", line 26, in <module>
dataset = Dataset.from_dict({"foo": [array]}, features=features)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 899, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 799, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 3725, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 5254, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 350, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 236, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 204, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2091, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2139, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 879, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: int8>>, got list<item: list<item: int64>>
```
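As a side note, a possible stopgap I have not verified beyond the `Array2D` dtype results quoted in the comments: when the sequence has a fixed length, the same data can be modelled as a single `Array3D`, which may sidestep the failing `Sequence(Array2D)` cast.
```python
# Hedged workaround sketch (assumption): a fixed-length sequence of 2x2 arrays
# expressed as one Array3D instead of Sequence(Array2D).
from datasets import Array3D, Dataset, Features
import numpy as np

features = Features({"foo": Array3D(dtype="int8", shape=(2, 2, 2))})
array = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]], dtype="int8")
dataset = Dataset.from_dict({"foo": [array]}, features=features)
print(dataset[0]["foo"])
```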
### Expected behavior
Not to fail.
### Environment info
- Python 3.10.6
- datasets: master branch
- Numpy: 1.23.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5936/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5936/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5931
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5931/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5931/events
|
https://github.com/huggingface/datasets/issues/5931
| 1,745,408,784
|
I_kwDODunzps5oCNMQ
| 5,931
|
`datasets.map` not reusing cached copy by default
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on the default caching mechanism."
] |
2023-06-07T09:03:33Z
|
2023-06-21T16:15:40Z
|
2023-06-21T16:15:40Z
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the transform is applied again and the cached copy is not picked up. Is there any way to reuse the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions?
One more thing: my dataset occupies 6 GB of storage after I use `map`. Is there any way to reduce that memory usage?
### Steps to reproduce the bug
```
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
    self.raw_datasets = self.raw_datasets.cast_column(
        "audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
    )

vectorized_datasets = self.raw_datasets.map(
    self.prepare_dataset,
    remove_columns=next(iter(self.raw_datasets.values())).column_names,
    num_proc=self.num_workers,
    desc="preprocess datasets",
)

# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
    self.is_audio_in_length_range,
    num_proc=self.num_workers,
    input_columns=["input_length"],
)

def prepare_dataset(self, batch):
    # load audio
    sample = batch["audio"]
    inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
    batch["input_values"] = inputs.input_values[0]
    batch["input_length"] = len(batch["input_values"])
    batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
    return batch
```
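A minimal sketch of the fix suggested in the comments (passing an explicit `cache_file_name` so a rerun reuses the cached result). Here `raw_dataset`, `prepare_dataset`, the paths, `num_proc`, and the filter threshold are placeholders for a single `Dataset`, not the `DatasetDict` used above:
```python
# Hedged sketch: pin each transform's cache to an explicit file so reruns reuse it.
import os

os.makedirs("/tmp/ds_cache", exist_ok=True)

vectorized = raw_dataset.map(
    prepare_dataset,
    remove_columns=raw_dataset.column_names,
    num_proc=4,
    cache_file_name="/tmp/ds_cache/vectorized.arrow",
)
filtered = vectorized.filter(
    lambda length: length < 20 * 16000,
    input_columns=["input_length"],
    num_proc=4,
    cache_file_name="/tmp/ds_cache/filtered.arrow",
)
```
With `num_proc > 1`, `datasets` appends a per-process suffix to the given file name, so each worker writes its own shard of the cache.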
### Expected behavior
`map` should reuse the cached copy, and, if possible, there should be an alternative technique to reduce memory usage after using `map`.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5931/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5930
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5930/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5930/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5930/events
|
https://github.com/huggingface/datasets/issues/5930
| 1,745,184,395
|
I_kwDODunzps5oBWaL
| 5,930
|
loading private custom dataset script - authentication error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flckv",
"id": 103381497,
"login": "flckv",
"node_id": "U_kgDOBil5-Q",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"repos_url": "https://api.github.com/users/flckv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flckv",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This issue seems to have been resolved, so I'm closing it."
] |
2023-06-07T06:58:23Z
|
2023-06-15T14:49:21Z
|
2023-06-15T14:49:20Z
|
NONE
| null | null | null | null |
### Describe the bug
Training a model with my custom dataset, which is stored on the Hugging Face Hub and loaded with a loading script, requires authentication, but I am not sure how to provide it.
I am logged in both in the terminal and in the browser. I receive this error:
/python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels `(ConnectionError('Unauthorized for URL `https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels. Please use the parameter `**`use_auth_token=True`**` after logging in with `**`huggingface-cli login`**`'))
When I added `use_auth_token=True` and logged in via the terminal, I then received this error,
or the same error in a different format:
raise ConnectionError(f"`Couldn't reach {url} (error {response.status_code}`)")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (`error 401`)
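For what it's worth, a minimal diagnostic sketch (an assumption on my side, not the fix) to confirm that the token saved by `huggingface-cli login` actually grants read access to the private repo named in the error:
```python
# Hedged diagnostic sketch: check that the locally saved token can see the private dataset.
from huggingface_hub import HfApi, HfFolder

token = HfFolder.get_token()  # token written by `huggingface-cli login`
api = HfApi()
print(api.whoami(token=token)["name"])
print(api.dataset_info("fkov/s", token=token).id)  # raises an error if the token lacks access
```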
### Steps to reproduce the bug
1. cloned transformers library locally:
https://huggingface.co/docs/transformers/v4.15.0/examples :
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> cd /transformers/examples/pytorch/audio-classification
> pip install -r requirements.txt
2. created **loading script**
> https://huggingface.co/docs/datasets/dataset_script added next to dataset:
3. uploaded **private custom dataset** with loading script to HuggingFace
> https://huggingface.co/docs/datasets/dataset_script
4. added dataset loading script to **local directory** in the above cloned transformers library:
> cd /transformers/examples/pytorch/audio-classification
5. logged in to HuggingFace on local terminal with :
> **huggingface-cli login**
6. run the model with the custom dataset stored on HuggingFace with code: https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
cd /transformers/examples/pytorch/audio-classification
> python run_audio_classification.py \
> --model_name_or_path facebook/wav2vec2-base \
> --output_dir l/users/flck/outputs/wav2vec2-base-s \
> --overwrite_output_dir \
> --dataset_name s \
> --dataset_config_name s \
> --remove_unused_columns False \
> --do_train \
> --do_eval \
> --fp16 \
> --learning_rate 3e-5 \
> --max_length_seconds 1 \
> --attention_mask False \
> --warmup_ratio 0.1 \
> --num_train_epochs 5 \
> --per_device_train_batch_size 32 \
> --gradient_accumulation_steps 4 \
> --per_device_eval_batch_size 32 \
> --dataloader_num_workers 4 \
> --logging_strategy steps \
> --logging_steps 10 \
> --evaluation_strategy epoch \
> --save_strategy epoch \
> --load_best_model_at_end True \
> --metric_for_best_model accuracy \
> --save_total_limit 3 \
> --seed 0 \
> --push_to_hub \
> **--use_auth_token=True**
### Expected behavior
Be able to train a model with the https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/run_audio_classification.py script using a private custom dataset stored on the Hugging Face Hub.
### Environment info
- datasets version: 2.12.0
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5930/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5930/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5929/events
|
https://github.com/huggingface/datasets/issues/5929
| 1,744,478,456
|
I_kwDODunzps5n-qD4
| 5,929
|
Importing PyTorch reduces multiprocessing performance for map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Maxscha",
"id": 12814709,
"login": "Maxscha",
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Maxscha",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.",
"Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigations after your comment and figured out it's only affecting some hardware/software configurations with the `pytorch` installation of `conda-forge`. Based on this we found the following issue in PyTorch: https://github.com/pytorch/pytorch/issues/102269 with a quick fix for now.\r\n\r\nSince it seems to be a deeper issue with forking processes, the difference between`multiprocess` and `multiprocessing` didn't make a difference.\r\n\r\nClosing this, since the issue comes from `pytorch` not `dataset`. \r\n"
] |
2023-06-06T19:42:25Z
|
2023-06-16T13:09:12Z
|
2023-06-16T13:09:12Z
|
NONE
| null | null | null | null |
### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
PROC=32
if __name__ == "__main__":
    dataset = [True] * 10000000
    dataset = Dataset.from_dict({'train': dataset})

    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    end = time.time()
    print(end - start)
```
Takes around 4 seconds on my machine.
While the same code, but with an `import torch`:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch
PROC=32
if __name__ == "__main__":
    dataset = [True] * 10000000
    dataset = Dataset.from_dict({'train': dataset})

    start = time.time()
    dataset.map(lambda x: x, num_proc=PROC)
    end = time.time()
    print(end - start)
```
takes around 22 seconds.
### Expected behavior
I would expect the import of torch not to have such a significant effect on the performance of `map` using multiprocessing.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
- torch: 2.0.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Maxscha",
"id": 12814709,
"login": "Maxscha",
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Maxscha",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5929/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5927
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5927/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5927/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5927/events
|
https://github.com/huggingface/datasets/issues/5927
| 1,744,009,032
|
I_kwDODunzps5n83dI
| 5,927
|
`IndexError` when indexing `Sequence` of `Array2D` with `None` values
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Easy fix would be to add:\r\n\r\n```python\r\nnull_indices -= np.arange(len(null_indices))\r\n```\r\n\r\nbefore L279, but I'm not sure it's the most intuitive way to fix it.",
"Same issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7fcbe5b1575c8d162b65b9397b3dfda995a4e048/src/datasets/features/features.py#L1398\r\n\r\nFixed in #5948 "
] |
2023-06-06T14:36:22Z
|
2023-06-13T12:39:39Z
|
2023-06-09T13:23:50Z
|
MEMBER
| null | null | null | null |
### Describe the bug
Having `None` values in a `Sequence` of `ArrayND` fails.
### Steps to reproduce the bug
```python
from datasets import Array2D, Dataset, Features, Sequence
data = [
    [
        [[0]],
        None,
        None,
    ]
]
feature = Sequence(Array2D((1, 1), dtype="int64"))
dataset = Dataset.from_dict({"a": data}, features=Features({"a": feature}))
dataset[0] # error raised only when indexing
```
```
Traceback (most recent call last):
File "/Users/quentingallouedec/gia/c.py", line 13, in <module>
dataset[0] # error raised only when indexing
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2658, in __getitem__
return self._getitem(key)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2643, in _getitem
formatted_output = format_table(
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 634, in format_table
return formatter(pa_table, query_type=query_type)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 406, in __call__
return self.format_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 441, in format_row
row = self.python_arrow_extractor().extract_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 144, in extract_row
return _unnest(pa_table.to_pydict())
File "pyarrow/table.pxi", line 4146, in pyarrow.lib.Table.to_pydict
File "pyarrow/table.pxi", line 1312, in pyarrow.lib.ChunkedArray.to_pylist
File "pyarrow/array.pxi", line 1521, in pyarrow.lib.Array.to_pylist
File "pyarrow/scalar.pxi", line 675, in pyarrow.lib.ListScalar.as_py
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 760, in to_pylist
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 725, in to_numpy
numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
File "<__array_function__ internals>", line 200, in insert
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/numpy/lib/function_base.py", line 5426, in insert
old_mask[indices] = False
IndexError: index 3 is out of bounds for axis 0 with size 3
```
AFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.
I strongly suspect that the problem comes from this line, or `np.insert` is misused:
https://github.com/huggingface/datasets/blob/02ee418831aba68d0be93227bce8b3f42ef8980f/src/datasets/features/features.py#L729
To put it simply, you want something that does this:
```python
import numpy as np
numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
np.insert(numpy_arr, null_indices, np.nan, axis=0)
# raises an error, instead of outputting
# array([[[ 0.]],
#        [[nan]],
#        [[nan]]])
```
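For reference, a minimal illustration of the index adjustment suggested in the comments, using the same toy values as above:
```python
import numpy as np

numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
# Shift the indices so they refer to positions in the not-yet-expanded array;
# np.insert then produces the intended output instead of raising IndexError.
adjusted = null_indices - np.arange(len(null_indices))  # array([1, 1])
print(np.insert(numpy_arr.astype(np.float64), adjusted, np.nan, axis=0))
# [[[ 0.]]
#  [[nan]]
#  [[nan]]]
```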
### Expected behavior
The previous code should not raise an error.
### Environment info
- Python 3.10.11
- datasets 2.10.0
- pyarrow 12.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5927/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5927/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5926/events
|
https://github.com/huggingface/datasets/issues/5926
| 1,743,922,028
|
I_kwDODunzps5n8iNs
| 5,926
|
Uncaught exception when generating the splits from a dataset that miss data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @severo.\r\n\r\nThis is a known issue with `fsspec`:\r\n- #5862\r\n- https://github.com/fsspec/filesystem_spec/issues/1265"
] |
2023-06-06T13:51:01Z
|
2023-06-07T07:53:16Z
| null |
COLLABORATOR
| null | null | null | null |
### Describe the bug
The dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform: https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.
When trying to generate the split names, we get an exception that is not correctly caught.
Seen originally in https://github.com/huggingface/datasets-server/blob/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15/services/worker/src/worker/job_runners/config/parquet_and_info.py#L435
### Steps to reproduce the bug
```python
>>> from datasets import StreamingDownloadManager, load_dataset_builder
>>> builder = load_dataset_builder(path="blog_authorship_corpus")
Downloading builder script: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 5.60k/5.60k [00:00<00:00, 23.1MB/s]
Downloading metadata: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2.81k/2.81k [00:00<00:00, 14.7MB/s]
Downloading readme: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7.30k/7.30k [00:00<00:00, 30.8MB/s]
>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)
>>> builder._split_generators(dl_manager)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/blog_authorship_corpus/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683/blog_authorship_corpus.py", line 79, in _split_generators
data = dl_manager.download_and_extract(_DATA_URL)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 435, in map_nested
return function(data_struct)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol
with fsspec.open(urlpath, **kwargs) as f:
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
return open_files(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
out = super().__getitem__(item)
IndexError: list index out of range
```
### Expected behavior
The `datasets` library should raise a proper, informative exception instead of letting the low-level `IndexError` propagate.
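A minimal user-side workaround sketch (not the library fix itself), reusing the `builder` and `dl_manager` objects from the reproduction above; the error message text is illustrative only:
```python
# Workaround sketch: surface a clearer error until datasets/fsspec wrap this case.
try:
    builder._split_generators(dl_manager)
except IndexError as err:
    # fsspec raises a bare IndexError when the remote file cannot be opened
    # (here, the Google Drive URL returns a 404).
    raise FileNotFoundError(
        "blog_authorship_corpus: the hosted data file could not be streamed"
    ) from err
```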
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.19.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5926/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5926/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5925/events
|
https://github.com/huggingface/datasets/issues/5925
| 1,741,941,436
|
I_kwDODunzps5n0-q8
| 5,925
|
Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/78868366?v=4",
"events_url": "https://api.github.com/users/mtkinit/events{/privacy}",
"followers_url": "https://api.github.com/users/mtkinit/followers",
"following_url": "https://api.github.com/users/mtkinit/following{/other_user}",
"gists_url": "https://api.github.com/users/mtkinit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtkinit",
"id": 78868366,
"login": "mtkinit",
"node_id": "MDQ6VXNlcjc4ODY4MzY2",
"organizations_url": "https://api.github.com/users/mtkinit/orgs",
"received_events_url": "https://api.github.com/users/mtkinit/received_events",
"repos_url": "https://api.github.com/users/mtkinit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtkinit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtkinit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtkinit",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] |
2023-06-05T14:46:04Z
|
2023-06-19T17:22:43Z
|
2023-06-19T17:22:43Z
|
NONE
| null | null | null | null |
### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now that the API of `HfApi.list_datasets` was changed and it returns an `Iterable` instead of a `list`, `datasets.list_datasets` sometimes returns a `list` and sometimes an `Iterable`.
It would be helpful if the return type annotation of the `datasets.list_datasets` function indicated this.
Thanks,
Martin
### Steps to reproduce the bug
Here, the code crashed after we updated the `datasets` library:
```python
# list_datasets no longer returns a list, which leads to an error when one tries to slice it
for dataset in datasets.list_datasets(with_details=True)[:limit]:
...
```
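A defensive workaround sketch until the return type is stabilized, assuming only that the return value is iterable; `itertools.islice` slices lists and lazy iterables alike (`limit` is the hypothetical bound from the snippet above):
```python
from itertools import islice

import datasets

limit = 10  # hypothetical value, matching the `limit` used above

# islice works whether datasets.list_datasets returns a list or a lazy
# iterable, so the calling code no longer depends on the exact return type.
for ds in islice(datasets.list_datasets(with_details=True), limit):
    ...
```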
### Expected behavior
The return type of `datasets.list_datasets` should be annotated (and documented) so callers know whether they receive a `list` or a lazy `Iterable`.
### Environment info
Ubuntu 22.04
datasets 2.12.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5925/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5923/events
|
https://github.com/huggingface/datasets/issues/5923
| 1,737,436,227
|
I_kwDODunzps5njyxD
| 5,923
|
Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/71412682?v=4",
"events_url": "https://api.github.com/users/ehuangc/events{/privacy}",
"followers_url": "https://api.github.com/users/ehuangc/followers",
"following_url": "https://api.github.com/users/ehuangc/following{/other_user}",
"gists_url": "https://api.github.com/users/ehuangc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ehuangc",
"id": 71412682,
"login": "ehuangc",
"node_id": "MDQ6VXNlcjcxNDEyNjgy",
"organizations_url": "https://api.github.com/users/ehuangc/orgs",
"received_events_url": "https://api.github.com/users/ehuangc/received_events",
"repos_url": "https://api.github.com/users/ehuangc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ehuangc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehuangc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ehuangc",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; print(pyarrow.__file__)\"\r\n```\r\n\r\n\r\n",
"> Based on [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187), this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n> \r\n> Can you please execute the following commands in the terminal and paste the output here?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\n\r\nHere is the output to the first command:\r\n```\r\narrow-cpp 11.0.0 py39h7f74497_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n```\r\nand the second:\r\n```\r\n/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/__init__.py\r\n```\r\nThanks!\r\n\r\n\r\n\r\n",
"after installing pytesseract 0.3.10, I got the above error. FYI ",
"RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\npyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject",
"I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n\r\nDo we need to update dependencies? ",
"Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291",
"For conda with python3.8.16 this solved my problem! thanks!\r\n\r\n> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies? I can work on that if no one else is working on it.\r\n\r\n",
"Thanks for replying. I am not sure about those environments but it seems like pyarrow-12.0.0 does not work for conda with python 3.8.16. \r\n\r\n> Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291\r\n\r\n",
"Got the same error with:\r\n\r\n```\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n\r\npython 3.10.11 h7a1cb2a_2 \r\n\r\ndatasets 2.13.0 pyhd8ed1ab_0 conda-forge\r\n```",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis solved the issue for me as well.",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nSolved it for me also",
"> ๅบไบ [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187)๏ผ่ฟๅฏ่ฝๆๅณ็ๆจ็ๅฎ่ฃ
ไธ ไธๅ
ผๅฎนใ`pyarrow``datasets`\r\n> \r\n> ๆจ่ฝๅฆๅจ็ป็ซฏไธญๆง่กไปฅไธๅฝไปคๅนถๅฐ่พๅบ็ฒ่ดดๅฐๆญคๅค๏ผ\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\n/root/miniconda3/lib/python3.10/site-packages/pyarrow/__init__.py",
"Got the same problem with\r\n\r\narrow-cpp 11.0.0 py310h1fc3239_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\nminiforge3/envs/mlp/lib/python3.10/site-packages/pyarrow/__init__.py\r\n\r\nReverting back to pyarrow 11 solved the problem.\r\n",
"Solved with `pip install pyarrow==11.0.0`",
"I got different. Solved with\r\npip install pyarrow==12.0.1\r\npip install cchardet\r\n\r\nenv:\r\nPython 3.9.16\r\ntransformers 4.32.1",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis works for me as well",
"> I got different. Solved with pip install pyarrow==12.0.1 pip install cchardet\r\n> \r\n> env: Python 3.9.16 transformers 4.32.1\r\n\r\nI guess it also depends on the Python version. I got Python 3.11.5 and pyarrow==12.0.0. \r\nIt works! ",
"Hi, if this helps anyone, pip install pyarrow==11.0.0 did not work for me (I'm using Colab) but this worked: \r\n!pip install --extra-index-url=https://pypi.nvidia.com cudf-cu11",
"> Hi, if this helps anyone, pip install pyarrow==11.0.0 did not work for me (I'm using Colab) but this worked: !pip install --extra-index-url=https://pypi.nvidia.com cudf-cu11\r\n\r\nthanks! I met the same problem and your suggestion solved it.",
"(I was doing quiet install so I didn't notice it initially)\r\nI've been loading the same dataset for months on Colab, just now I got this error as well. I think Colab has changed their image recently (I had some errors regarding CUDA previously as well). beware of this and restart runtime if you're doing quite pip installs.\r\nmoreover installing stable version of datasets on pypi gives this:\r\n\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nibis-framework 7.1.0 requires pyarrow<15,>=2, but you have pyarrow 15.0.0 which is incompatible.\r\nSuccessfully installed datasets-2.17.0 dill-0.3.8 multiprocess-0.70.16 pyarrow-15.0.0\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n``` \r\n",
"for colab - pip install pyarrow==11.0.0",
"The above methods didn't help me. So I installed an older version: `!pip install datasets==2.16.1`\r\nand `import datasets` worked!!",
"@rasith1998 @PennlaineChu You can avoid this issue by restarting the session after the `datasets` installation (see https://github.com/huggingface/datasets/issues/6661 for more info)\r\n\r\nAlso, we've contacted Google Colab folks to update the default PyArrow installation, so the issue should soon be \"officially\" resolved on their side.",
"> Also, we've contacted Google Colab folks to update the default PyArrow installation, so the issue should soon be \"officially\" resolved on their side.\r\n\r\nThis has been done! Google Colab now pre-installs PyArrow 14.0.2, which makes this issue unlikely to happen, so I'm closing it.",
"I am facing this issue outside of Colab, in a normal Python (3.10.14) environment:\r\n```\r\npyarrow==11.0.0\r\ndatasets=2.20.0\r\ntransformers==4.41.2\r\n```\r\n\r\nWhat can I do to solve it?\r\n\r\nI am somewhat bound to `pyarrow==11.0.0`. Is there a version of `datasets` that supports this?"
] |
2023-06-02T04:16:32Z
|
2024-06-27T10:07:49Z
|
2024-02-25T16:38:03Z
|
NONE
| null | null | null | null |
### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
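A quick diagnostic sketch based on the suggestions in the comments: confirm which `pyarrow` build Python actually imports and whether it matches the conda-managed `arrow-cpp`; the version pin mentioned below is the workaround reported by other users, not necessarily the only fix.
```python
import pyarrow

# Show the version and install location of the pyarrow that gets imported.
print(pyarrow.__version__)  # e.g. 12.0.0 installed via pip
print(pyarrow.__file__)     # path inside the active conda environment

# If this version differs from the conda-managed arrow-cpp
# (compare with `conda list | grep arrow`), installing a matching pair,
# e.g. `pip install "pyarrow==11.0.0"`, is the workaround reported above.
```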
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5923/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|