You're asking Claude to refactor multiple different job types all at once, which creates too much complexity in a single pass. The prompt itself is also somewhat unclear about the specific transformations needed.
Try this:
1. Break it down by job type. Instead of "refactor the codebase to make use of the new JobDefinition.create", identify each distinct job type and refactor them one at a time. This keeps the context focused and prevents the agent from getting overwhelmed.
2. For many jobs, script it. If you have dozens or hundreds of jobs to refactor, write a shell loop:

```bash
for job_type in "EmailJob" "DataProcessingJob" "ReportJob"; do
  claude --dangerously-skip-permissions -p "Refactor only ${job_type} to use the new JobDefinition.create signature: make it async, pass databaseClient at creation, remove return value and 'Job created' logs. Change ONLY ${job_type} files."
  git add -A && git commit -m "Refactor ${job_type} to new signature"
done
```
This creates atomic commits you can review/revert individually.

3. Consider a migration shim. Have Claude create a compatibility layer so jobs can work with either the old or new signature during the refactor. This lets you test incrementally without breaking everything at once.
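To make the shim idea concrete, here's a minimal sketch in TypeScript. The real `JobDefinition` API isn't shown in the thread, so the interfaces and names below (`DatabaseClient`, `Queue`, `shimCreate`) are assumptions for illustration only; the point is that old-style call sites can be adapted to the new signature before each job is migrated:

```typescript
// Hypothetical shapes -- the actual JobDefinition types are not shown in
// the thread, so these are illustrative assumptions.
interface DatabaseClient { query(sql: string): Promise<unknown>; }
type Queue = string[];

// New-style signature: async, takes the queue and a databaseClient, returns void.
type NewCreate = (queue: Queue, db: DatabaseClient) => Promise<void>;

// Old-style signature: synchronous, takes a batch, returns a result.
type OldCreate = (batch: Queue) => { id: number };

// Shim: wraps an old-style create so call sites can adopt the new
// signature now, and each job can be migrated underneath it later.
function shimCreate(oldCreate: OldCreate): NewCreate {
  return async (queue, _db) => {
    oldCreate(queue); // result intentionally dropped: new signature returns void
  };
}
```

Once every job is migrated to the new signature natively, the shim is deleted in a final commit.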
4. Your prompt needs clarity. Here's a clearer version:

```
Refactor ONLY [SpecificJobName] class to match the new JobDefinition.create signature:
- OLD: create(batch) returns result, synchronous
- NEW: create(queue, databaseClient) returns void, async
- Remove any "Job created" console.log statements
- Do NOT modify unrelated code, reorder parameters, or rename variables
```
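The OLD/NEW contrast in that prompt can be sketched in TypeScript. The class name and body below are hypothetical (the thread never shows the real job code); only the signature change mirrors the prompt:

```typescript
// Illustrative types -- the real ones are not shown in the thread.
interface DatabaseClient { query(sql: string): Promise<unknown>; }
type Queue = string[];

// OLD: create(batch) is synchronous, returns a result, and logs on creation.
class EmailJobOld {
  create(batch: Queue): { id: number } {
    console.log("Job created");  // removed in the new version
    return { id: batch.length }; // return value removed in the new version
  }
}

// NEW: create(queue, databaseClient) is async, returns void, no log.
class EmailJobNew {
  async create(queue: Queue, databaseClient: DatabaseClient): Promise<void> {
    await databaseClient.query(`INSERT ... ${queue.length} items`); // placeholder
  }
}
```

Spelling out both shapes like this gives the agent a mechanical diff to apply rather than a description to interpret.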
The issue with your original prompt is that it doesn't clearly specify the before/after states or which specific files to target. Claude Code works best with precise, mechanical instructions rather than contextual descriptions like "Previously... Now it takes..."

Pro tip: Use Claude itself to improve your prompts! Try:
```bash
claude -p "Help me write a clearer prompt for this refactoring task: [paste your original prompt]"
```

and save the result to a markdown file for reuse.

The key insight is that agentic tools excel at focused, well-defined transformations but struggle when the scope is too broad or the instructions are ambiguous. "Don't do anything else" is not an instruction that Claude interprets well. The "going off the rails" behavior you're seeing is Claude trying to be helpful by "improving" code it encounters, which is why explicit positive constraints ("ONLY do X") work far better than broad directives about what it shouldn't do.
I'm almost positive that he didn't use thinking mode, because I've asked GPT-5 thinking his question verbatim in 10 parallel threads and it said "you can't" and thoroughly explained why every time. Below is the first response I got back.
---
Short answer: you can’t. iOS doesn’t ship a Zstandard encoder/decoder. Apple’s built-in `Compression` framework only exposes LZFSE, LZ4(/RAW), zlib/DEFLATE, and LZMA—no ZSTD knob to turn. ([Microsoft Learn][1], [Apple Developer][2])
If you’re okay with a built-in alternative that’s “zstd-ish” in speed/ratio, use LZFSE via `Compression`’s streaming API. Here’s a minimal streaming compressor you can drop in; it takes chunks of `Data` and yields compressed chunks (no third-party code):
```swift
import Compression

final class StreamCompressor {
    private var stream = compression_stream()
    private let algo: compression_algorithm
    private var isInitialized = false

    init(algorithm: compression_algorithm = COMPRESSION_LZFSE) {
        self.algo = algorithm
    }

    func append(_ input: Data, isFinal: Bool, outChunkSize: Int = 64 * 1024) -> [Data] {
        if !isInitialized {
            let status = compression_stream_init(&stream, COMPRESSION_STREAM_ENCODE, algo)
            precondition(status != COMPRESSION_STATUS_ERROR, "compression_stream_init failed")
            isInitialized = true
        }
        var outputs: [Data] = []
        input.withUnsafeBytes { (srcBuf: UnsafeRawBufferPointer) in
            let flags: Int32 = isFinal ? Int32(COMPRESSION_STREAM_FINALIZE.rawValue) : 0
            stream.src_ptr = srcBuf.bindMemory(to: UInt8.self).baseAddress!
            stream.src_size = srcBuf.count
            var out = Data(count: outChunkSize)
            out.withUnsafeMutableBytes { (dstBuf: UnsafeMutableRawBufferPointer) in
                while true {
                    stream.dst_ptr = dstBuf.bindMemory(to: UInt8.self).baseAddress!
                    stream.dst_size = dstBuf.count
                    let status = compression_stream_process(&stream, flags)
                    let produced = dstBuf.count - stream.dst_size
                    if produced > 0 {
                        outputs.append(Data(bytes: dstBuf.baseAddress!, count: produced))
                    }
                    if status == COMPRESSION_STATUS_OK && stream.src_size == 0 && stream.dst_size > 0 {
                        break // input fully consumed; need more input to continue
                    } else if status == COMPRESSION_STATUS_OK && stream.dst_size == 0 {
                        continue // output buffer filled; loop to drain more
                    } else if status == COMPRESSION_STATUS_END {
                        break // stream finalized
                    } else if status == COMPRESSION_STATUS_ERROR {
                        preconditionFailure("compression_stream_process failed")
                    } else {
                        break // shouldn't happen
                    }
                }
            }
        }
        return outputs
    }

    deinit {
        if isInitialized {
            compression_stream_destroy(&stream)
        }
    }
}
```

Usage:

```swift
let compressor = StreamCompressor(algorithm: COMPRESSION_LZFSE)
var compressed: [Data] = []
for (i, chunk) in chunks.enumerated() {
    let final = i == chunks.count - 1
    compressed.append(contentsOf: compressor.append(chunk, isFinal: final))
}
// `compressed` now contains the zstd-adjacent (LZFSE) stream
```
If you truly need the ZSTD format (interop requirement), you’ll have to ship the reference C library (`libzstd`) in your app (static link it and call through a tiny Swift wrapper). That’s still “third-party,” even if you vendor the sources instead of using SPM/CocoaPods. There is no system-provided ZSTD on iOS today. ([GitHub][3])
tl;dr: iOS doesn’t expose ZSTD. Use `Compression` with LZFSE for built-in streaming, or vendor `libzstd` and bridge to it if you must emit/ingest ZSTD. ([Microsoft Learn][1], [Apple Developer][2], [GitHub][3])
[1]: https://learn.microsoft.com/en-us/dotnet/api/compression.com... "CompressionAlgorithm Enum (Compression)"
[2]: https://developer.apple.com/documentation/compression/compre... "COMPRESSION_LZFSE | Apple Developer Documentation"
[3]: https://github.com/facebook/zstd?utm_source=chatgpt.com "facebook/zstd: Zstandard - Fast real-time compression ..."
Universally across ChatGPT, Claude and Gemini, continually revising/editing a document over the course of a long conversation just gets worse and worse. I have learned the trick of exporting the document and starting a brand new conversation all over again, but there should really just be a "clear context window" button or similar to let me perpetually stay in the same chat and iterate on some writing or code without the quality of feedback/assistance degrading.
It sounds like you're generally unfamiliar with using AI to help you at all? Or maybe you're also being disingenuous? It's insanely easy to figure this stuff out; I literally know a dozen people who are not even engineers and have no programming experience who use these tools. Here's what Claude (the free version at claude.ai) said in response to me asking "i have no idea how to use AI coding assistants, can you succinctly explain to me what i need to do? like, what do i download, run, etc in order to try different models and services, what are the best tools and what do they do?":
Here's a quick guide to get you started with AI coding assistants:
## Quick Start Options (Easiest)
*1. Web-based (Nothing to Download)*
- *Claude.ai* - You're here! I can help with code, debug, explain concepts
- *ChatGPT* - Similar capabilities, different model
- *GitHub Copilot Chat* - Web interface if you have a GitHub account

*2. IDE Extensions (Most Popular)*
- *Cursor* - Full VS Code replacement with AI built-in. Download from cursor.com, works out of the box
- *GitHub Copilot* - Install as VS Code/JetBrains extension ($10/month), autocompletes as you type
- *Continue* - Free, open-source VS Code extension, lets you use multiple models

*3. Command Line*
- *Claude Code* - Anthropic's terminal tool for autonomous coding tasks. Install via `npm install -g @anthropic-ai/claude-code`
- *Aider* - Open-source CLI tool that edits files directly
## What They Do
- *Autocomplete tools* (Copilot, Cursor) - Suggest code as you type, finish functions
- *Chat tools* (Claude, ChatGPT) - Explain, debug, design systems, write full programs
- *Autonomous tools* (Claude Code, Aider) - Actually edit your files, make changes across codebases
## My Recommendation to Start
1. Try *Cursor* first - download it, paste in some code, and ask it questions. It's the most beginner-friendly
2. Or just start here in Claude - paste your code and I can help debug, explain, or write new features
3. Once comfortable, try GitHub Copilot for in-line suggestions while coding
The key is just picking one and trying it - you don't need to understand everything upfront!