Open to Work

Joseph Robert Turcotte

Fishtiks

AI & ML interests

Roleplaying, lorabration, abliteration, smol models, extensive filtering, unusual datasets, home usage, HPCs for AI, distributed training/federated learning, and sentience. AI should find and label AI hallucinations with GANs so we can give them context and put them to use.

Recent Activity

reacted to Ujjwal-Tyagi's post with 👍 about 7 hours ago
For a more detailed analysis, you can read the article here: https://huggingface.co/blog/Ujjwal-Tyagi/steering-not-censoring

We are sleepwalking into a crisis. I am deeply concerned about AI model safety right now because, as the community rushes to roll out increasingly powerful open-source models, we are completely neglecting the most critical aspect: safety. It seems that nobody is seriously thinking about the potential consequences of unregulated model outputs or the necessity of robust guardrails. We are essentially planting the seeds of our own destruction if we prioritize raw performance over security.

This negligence is terrifyingly evident in the current landscape. Take Qwen Image 2512, for example: it delivers undeniably strong performance, but its guardrails are so weak that it is dangerous to deploy. In stark contrast, Z Image may not get as much hype for its power, but it has much better safety guardrails than Qwen Image 2512.

The open-source community and developers must recognize that capability without responsibility is a liability. We have to actively protect these models from bad actors who seek to exploit them for malicious purposes, such as generating disinformation, creating non-consensual imagery, or automating cyberattacks. It is no longer enough to simply release a powerful model; we must build layers of defense that make it resistant to jailbreaking and adversarial attacks. Developers need to prioritize alignment and robust filtering techniques just as much as they prioritize benchmark scores. We cannot hand such potent tools to the world without ensuring they have the safety mechanisms to prevent them from being turned against us.
replied to their post 2 days ago
Have extra processing power sitting idle? Have old devices you haven't used in years? I primarily recommend Folding@home for protein folding on your GPUs, but also BOINC, particularly on Android and Apple devices, because of their lower power usage. I've been doing this and put in about 14,000 hours a week, mostly mapping cancer markers through BOINC on an Aiyara cluster of Android devices. I also hold a sign out by the highway encouraging people to join BOINC.

Dylan Bucci, a young promoter of BOINC on school computers, wished before he died to get as many people as possible contributing, and in his honor the Dylan Bucci challenge was created. There's no reason to wait for a challenge, though. If you care about such things, there is an associated cryptocurrency for this kind of processing, but it's worth doing simply to save lives.

I look forward to AI-related endeavors like this. So far I only know of NATIX Drive&, Acurast, and HYRA AI, all of which use Androids I'd rather devote to BOINC; however, they do pay you and let you dedicate old devices entirely to the processing. On the same topic, DexPOINT monetizes your Android's Internet connection.

BOINC runs on Android, Apple devices, PCs of all sorts, Pi devices, Chrome devices, Fire Sticks, TV boxes, Android watches with the full OS, and just about anything that runs Android or can run Linux, and it also runs on Windows. Folding@home works best on PCs with modern NVIDIA GPUs, in a cool room. You can also run BOINC on modern computers, but throttle it, because they often get too hot.
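For the throttling mentioned above, here is a minimal sketch, assuming the BOINC client and its boinccmd tool are already installed; the data-directory path and the percentage values are illustrative assumptions and may differ on your system.

```python
# Minimal sketch: cap BOINC's CPU usage so a modern desktop stays cool.
# Assumptions: the BOINC client and boinccmd are installed; the data
# directory below is the common Linux package default and may differ.
import subprocess
from pathlib import Path

BOINC_DATA_DIR = Path("/var/lib/boinc-client")  # assumed default location
CPU_USAGE_LIMIT = 50.0   # percent of CPU time BOINC may use (illustrative)
MAX_NCPUS_PCT = 50.0     # percent of cores BOINC may use (illustrative)

# Local preference override that the BOINC client reads.
override = f"""<global_preferences>
   <cpu_usage_limit>{CPU_USAGE_LIMIT}</cpu_usage_limit>
   <max_ncpus_pct>{MAX_NCPUS_PCT}</max_ncpus_pct>
</global_preferences>
"""

(BOINC_DATA_DIR / "global_prefs_override.xml").write_text(override)

# Ask the running client to re-read the override without restarting.
subprocess.run(["boinccmd", "--read_global_prefs_override"], check=True)
```

The same limits can also be set from the BOINC Manager's computing preferences if you'd rather not edit files by hand.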

Organizations

Smilyai labs, Smilyai labs community, XORTRON - Criminal Computing