Compare commits

...

272 Commits

Author SHA1 Message Date
David Given
2efe521b3a Update documentation. 2023-07-24 21:48:37 +02:00
David Given
5c21103646 Get the ZDOS filesystem driver working. 2023-07-24 21:46:49 +02:00
David Given
082fe4e787 Hack in boilerplate for a ZDos filesystem. 2023-07-24 08:18:18 +02:00
David Given
5e13cf23f9 Allow read-only image reader/writers in the GUI. 2023-07-24 07:53:47 +02:00
David Given
8f98a1f557 Consolidate the image constructors in the same way that's been done for the
flux constructors.
2023-07-24 07:50:16 +02:00
David Given
5b21e8798b Allow read-only flux sources in the GUI. 2023-07-24 07:39:59 +02:00
David Given
b9ef5b7db8 Rename all the flux and image types to prefix the enums, due to them being in
the global namespace now.
2023-07-24 02:18:53 +02:00
David Given
9867f8c302 Combine enums for flux source/sink types. config.cc now knows whether they're
read-only, write-only, and read-write.
2023-07-24 00:50:54 +02:00
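
A rough sketch of the idea this commit describes, as a hedged illustration only: the type and field names below are assumptions, not the identifiers actually used in config.cc.

    // Illustrative only: a single enum shared by flux sources and sinks,
    // plus capability flags so the config layer can tell read-only,
    // write-only and read-write types apart.
    enum class FluxConstructorType
    {
        DRIVE,
        SCP,
        A2R,
        FL2,
    };

    struct FluxConstructorInfo
    {
        FluxConstructorType type;
        bool readable; // usable as a flux source
        bool writable; // usable as a flux sink
    };
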
David Given
315889faf6 Warning fix. 2023-07-23 22:49:23 +02:00
David Given
798e8fee89 Merge pull request #692 from davidgiven/protobuf
Rename the `requires` config field to `prerequisite`
2023-07-08 00:43:15 +02:00
dg
e1c49db329 Use brew --prefix to detect the installation path when copying licenses from
packages.
2023-07-07 22:10:52 +00:00
dg
dae9537472 Warning fixes. 2023-07-07 21:51:24 +00:00
dg
1330d56cdd Fix a bunch of errors caused by changes to libfmt. 2023-07-07 21:32:21 +00:00
David Given
6ce3ce20d0 Remove stray debugging code. 2023-07-07 01:03:31 +02:00
David Given
362c5ee9b0 Rename the requires config field to prerequisite, as requires is about to
become a C++ keyword.
2023-07-07 00:34:03 +02:00
David Given
0f34ce0278 Merge pull request #690 from Deledrius/nsi-fix
Fix incorrect product name in installer.
2023-06-26 14:27:39 +02:00
Joseph Davies
0c27c7c4c8 Fix incorrect product name in installer. 2023-06-25 16:18:03 -07:00
David Given
9db6efe7a2 Merge pull request #686 from davidgiven/docs
Update documentation.
2023-06-03 00:30:34 +02:00
David Given
8b8a22d7fb Add the PCB schematic. 2023-06-03 00:05:51 +02:00
David Given
0a70344bc1 Add Fedora package list. 2023-06-02 23:38:09 +02:00
David Given
e77d01911c Merge pull request #683 from davidgiven/gw
Reset the Greaseweazle data stream when connecting
2023-05-25 22:43:49 +02:00
David Given
d4c0853e1f Reset the Greaseweazle data stream when connecting. 2023-05-25 22:23:28 +02:00
David Given
363a4e959c Finally fix that format error when measuring disk speed. 2023-05-25 22:23:17 +02:00
David Given
9336a04681 Merge pull request #682 from davidgiven/docs
More documentation tweaking.
2023-05-25 22:10:10 +02:00
David Given
214ff38749 Tweak documentation layout. 2023-05-25 22:08:28 +02:00
David Given
a8f3c01d8b Add basic documentation for the extension formats. 2023-05-25 22:06:23 +02:00
David Given
4da6585ef9 Merge pull request #681 from davidgiven/bb679
Allow writing to Greaseweazle disks again by not setting hardSectorThresholdNs to inf.
2023-05-25 21:58:59 +02:00
David Given
df40100feb Merge pull request #680 from davidgiven/docs
Overhaul docs.
2023-05-25 21:40:32 +02:00
David Given
f2d92e93fb Format. 2023-05-25 21:27:49 +02:00
David Given
b4d8d569d2 Allow writing to Greaseweazle disks again by not setting hardSectorThresholdNs
to inf...
2023-05-25 21:26:44 +02:00
David Given
854b3e9c59 Better autogenerated documentation. 2023-05-25 21:14:41 +02:00
David Given
28ca2b72f1 Polishing. 2023-05-25 21:14:32 +02:00
David Given
7781c8179f Typo fix. 2023-05-25 20:20:02 +02:00
David Given
69ece3ffa0 Polish documentation. 2023-05-25 20:07:33 +02:00
David Given
53adcd92ed Spell (and capitalise) Greaseweazle correctly. 2023-05-25 19:50:05 +02:00
David Given
2bef6ca646 Merge pull request #678 from davidgiven/requirements
Overhaul config system and lots of other stuff
2023-05-16 01:29:58 +02:00
dg
bab350d771 Update Ubuntu build version. 2023-05-15 23:09:52 +00:00
dg
048dac223f Enable workflow cancelling when a new one is pushed. 2023-05-15 22:59:59 +00:00
dg
b7634da310 Work around Apple dev kit stupidity (definiting BYTE_SIZE in a standard
header...)
2023-05-15 22:51:16 +00:00
dg
882c92c64a Merge. 2023-05-15 22:49:52 +00:00
dg
4592dbe28b Add drive types for the Micropolis drives. 2023-05-15 22:49:15 +00:00
dg
edc0f21ae7 Remove all the requires TPI constraints --- I'm not sure this is a good idea. 2023-05-15 22:48:33 +00:00
dg
8715b7f6c1 Don't crash if no format is selected. 2023-05-15 22:14:06 +00:00
dg
99511910dd If an incoming FL2 file has no TPI, use the default rather than 0 (the default
will probably be zero, but anyway).
2023-05-15 22:00:03 +00:00
dg
a03478b011 Don't store the actual DriveProto in FL2 files, because it makes the proto tags
significant.
2023-05-15 21:59:24 +00:00
dg
5c428e1f07 Don't require the user to specify the drive TPI if they don't want to. 2023-05-15 21:51:05 +00:00
dg
ee57615735 Deal with invalid options in the GUI. 2023-05-15 20:55:33 +00:00
dg
67300e5769 Add the ability to validate the configuration, at least in the CLI; this may
require some refactoring for the GUI to apply cleanly.
2023-05-14 23:18:48 +00:00
dg
873e05051c Massive rework of the config system to be clearer, more robust, and more
flexible. (But it doesn't check options any more.)
2023-05-14 22:04:51 +00:00
dg
4daaec46a7 Greying out of the option buttons now works; but the whole way configs are
handled is pretty unsatisfactory and needs work.
2023-05-13 23:29:34 +00:00
dg
dd8cc7bfd4 Attempt to move the configuration setup logic into Config, so it's centralised. 2023-05-13 12:42:31 +00:00
dg
5568ac382f Eliminate Environment --- we don't use it and Config contains this
functionality.
2023-05-13 00:04:42 +00:00
dg
dcdb3e4455 Encoders and decoders are routed through Config. 2023-05-12 23:58:44 +00:00
dg
17b29b1626 Flux sinks and image writers are routed through Config. 2023-05-12 23:47:09 +00:00
dg
dcfcc6271c Sort out a whole bunch of other things, including cleaning up the way the
verification source is handled.
2023-05-12 23:28:25 +00:00
dg
1d77ba6429 ImageReaders can now contribute config. 2023-05-12 22:20:13 +00:00
dg
ff5f019ac1 Fetching the image reader is now done through Config. 2023-05-12 21:52:53 +00:00
dg
e61eeb8c6f Fetching the flux source is now done through Config. 2023-05-12 21:25:54 +00:00
dg
68d22e7f54 Fix build error. 2023-05-11 23:31:38 +00:00
dg
790f0a42e3 Move setting the image writer into Config. 2023-05-11 23:06:24 +00:00
dg
08e9e508cc Move setting the image reader into Config. 2023-05-11 23:02:05 +00:00
dg
ad1a8d608f Migrate setting the flux sink to Config. 2023-05-11 22:54:32 +00:00
dg
d74ed71023 Move setting the flux source into Config. 2023-05-11 22:47:00 +00:00
dg
0c7f9e0888 Enforce option requirements --- but the config stuff is still kinda broken and
will need rethinking, especially if flux files can carry configs with them.
2023-05-11 21:58:10 +00:00
dg
ba5f6528a8 Move option handling into Config. 2023-05-11 20:37:54 +00:00
dg
65cf552ec2 Some cleanup. 2023-05-11 20:03:25 +00:00
dg
715c0a0c42 Move config file loading into config.cc. 2023-05-11 19:58:16 +00:00
dg
9e383575d1 Any drive settings in the global config will override loaded settings from an
fl2 file.
2023-05-11 19:21:59 +00:00
dg
d84c366480 You can now fetch config fields by path. 2023-05-11 19:03:36 +00:00
dg
42e6c11081 Migrate to a new global config object. 2023-05-10 23:13:33 +00:00
dg
9ba3f90f1e Change the global config variable to a globalConfig() function. 2023-05-10 22:07:17 +00:00
dg
24ff51274b Fix formatting. 2023-05-10 21:14:30 +00:00
dg
4c4c752827 Add missing file. 2023-05-10 21:11:10 +00:00
dg
5022b67e4a Drive information is stored in FL2 files. 2023-05-10 20:47:55 +00:00
dg
6b990a9f51 Overhaul the TPI stuff; now both the drive and the layout have a TPI setting,
which must be set.
2023-05-10 19:58:44 +00:00
dg
e69ce3b8df Merge. 2023-05-10 18:31:42 +00:00
dg
cf537b6222 Add the proto part of option requirements. 2023-05-10 18:29:46 +00:00
David Given
9d1160faff Merge pull request #677 from davidgiven/errors
Clean up error handling.
2023-05-10 01:13:49 +02:00
noreply@github.com
ed4067f194 Merge pull request #677 from davidgiven/errors
Clean up error handling.
2023-05-09 23:13:49 +00:00
dg
d4b55cd8f5 Switch from Logger() to log(). 2023-05-09 22:47:36 +00:00
dg
baaeb0bca7 Fix mangled formatting caused by clang-format. 2023-05-09 21:39:35 +00:00
dg
466c3c34e5 Replace the Error() object with an error() function which takes fmt
formatspecs, making for much cleaner code. Reformatted everything.

This actually happened in multiple steps but then I corrupted my local
repository and I had to recover from the working tree.
2023-05-09 20:59:44 +00:00
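
As a hedged illustration of the calling-convention change this commit describes (the real FluxEngine signatures may differ), here is a minimal stand-in for an error() helper that takes fmt format specs directly:

    #include <fmt/format.h>

    #include <stdexcept>
    #include <string>
    #include <utility>

    // Toy stand-in only; the actual error() in the tree may behave differently.
    template <typename... T>
    [[noreturn]] void error(fmt::format_string<T...> fstr, T&&... args)
    {
        throw std::runtime_error(fmt::format(fstr, std::forward<T>(args)...));
    }

    int main()
    {
        std::string filename = "disk.flux";
        // Previously this was roughly a stream-style `Error() << ...` object;
        // now format specs are passed straight through:
        error("cannot open {}", filename);
    }
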
dg
099d7969ca Add the drive types dropdown, plus config fragments. Change the TPI settings to
floats (because 40-track 3.5" uses a TPI of 67.5...).
2023-05-08 23:04:52 +00:00
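
The 67.5 figure follows from double-stepping a standard 135 tpi 3.5" drive (an assumption consistent with the drive.tpi=135 values used elsewhere in this diff):

    // A 40-track format on an 80-track, 135 tpi 3.5" drive steps every other
    // track, giving an effective track pitch of 135 / 2 = 67.5 tpi, which is
    // why the TPI settings need to be floats.
    static_assert(135.0 / 2.0 == 67.5, "40-track 3.5\" effective TPI");
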
dg
5adfa95a85 Add a preliminary format for the 8050. 2023-05-08 23:03:37 +00:00
David Given
bfa0846ad0 Merge pull request #676 from davidgiven/doc
Correct index table rendering.
2023-05-08 20:38:53 +02:00
dg
7099264334 Correct index table rendering. 2023-05-08 18:37:16 +00:00
David Given
69b44e7968 Merge pull request #674 from davidgiven/doc
Overhaul documentation.
2023-05-08 01:13:57 +01:00
dg
fe39977ff7 Remember to add links to each profile's documentation. 2023-05-07 23:51:55 +00:00
dg
b9fc8de5b5 OSX compatibility. 2023-05-07 23:33:36 +00:00
dg
f7b8022d3a Switch to the traditional unicorn/dinosaur support categorisation. 2023-05-07 23:06:56 +00:00
dg
a62346c515 Add short names to each profile. 2023-05-07 21:49:14 +00:00
dg
e372d757ad Some tidying. 2023-05-07 21:32:36 +00:00
dg
ab1b10f935 Typo fix. 2023-05-07 21:30:09 +00:00
dg
8e918706dc First draft at autogenerating the table in the README. 2023-05-07 21:28:42 +00:00
dg
76450d00bf Tidy. 2023-05-07 19:53:57 +00:00
dg
ee53542e18 Eliminate config includes, as nothing uses them any more and it just makes
things like documentation generation hard.
2023-05-07 19:35:55 +00:00
dg
db004bc787 Preparse ConfigProto objects. 2023-05-07 19:28:29 +00:00
dg
71a7f3554e Remember to actually add the documentation files... 2023-05-07 18:40:24 +00:00
dg
5c3f422a53 First pass at automatic document generation. 2023-05-07 18:36:30 +00:00
dg
2fe0cec04a Copy documentation into the config definitions. 2023-05-07 16:48:17 +00:00
David Given
de59e781b5 Merge pull request #673 from davidgiven/options
Do more options overhauling.
2023-05-07 13:21:28 +01:00
dg
8c77af651b Run corpus tests on other platforms. 2023-05-07 11:56:32 +00:00
dg
638f6928cf Fix checkouts, maybe? 2023-05-07 11:53:56 +00:00
dg
ccc8e597a7 Don't use vformat, as apparently it's problematic. 2023-05-07 11:49:08 +00:00
dg
585f19d884 More fix. 2023-05-07 11:46:30 +00:00
dg
bb2b7d7df6 Typo fix. 2023-05-07 11:45:07 +00:00
dg
e75d218438 Attempt to run the corpus tests on github for Linux. 2023-05-07 11:44:14 +00:00
dg
7f81b554fd Try to decode the test corpus and make sure there were no decode regressions. 2023-05-07 11:37:50 +00:00
dg
2490f19a1a Add a preliminary option linter. Fix the format errors which showed up. 2023-05-07 00:29:21 +00:00
David Given
30f382bf22 Merge pull request #670 from davidgiven/dmf
Support DMF.
2023-05-07 00:15:13 +01:00
dg
ad03c187cf Merge from master. 2023-05-06 22:45:46 +00:00
David Given
06560b5a5a Merge pull request #672 from davidgiven/usb
Upgrade libusbp.
2023-05-06 23:43:37 +01:00
dg
7c40093698 Try to work around weird test failure on Windows. 2023-05-06 22:30:50 +00:00
dg
d37c75d703 Made test failures log to stdout. 2023-05-06 22:15:01 +00:00
dg
82bfb9a303 Upgrade libusbp. 2023-05-06 21:19:07 +00:00
dg
01682101a6 Update documentation. 2023-05-06 19:59:45 +00:00
dg
3c46f787b1 Always do an update when the state changes, because otherwise certain events
get lost.
2023-05-06 19:21:31 +00:00
dg
591d200283 Adjust DMF gaps. 2023-05-06 19:20:32 +00:00
dg
195534c21e Configure the 1680kB DMF format file system. 2023-05-06 18:11:24 +00:00
dg
0f9d851a29 Adjust the DMF format timings to match that of the Microsoft disk image. 2023-05-06 17:26:56 +00:00
dg
18a03baf99 Display object lengths in the flux viewer. 2023-05-06 15:34:44 +00:00
dg
5e06db4a52 Add preliminary DMF support. 2023-05-06 11:02:09 +00:00
David Given
bf78508ef7 Merge pull request #669 from davidgiven/hplif
Do some LIF enhancement.
2023-05-06 11:38:17 +01:00
dg
137c0340fb Fix month, which was off-by-one. Add custom attributes for the other LIF dirent
properties.
2023-05-06 10:20:10 +00:00
dg
e6d9de2d80 Decode timestamps into a custom property. 2023-05-06 10:16:12 +00:00
dg
d9b319eaed Add textual file types (where known) for LIF files. 2023-05-06 10:00:12 +00:00
dg
f2e713bde3 Stop trying to build for OSX 10.15, because it looks like the github runners
have been turned off.
2023-05-05 23:19:44 +00:00
David Given
94e2e494c9 Merge pull request #667 from davidgiven/options
Overhaul the options system.
2023-05-06 00:18:41 +01:00
dg
5af408e1d1 Add missing file. 2023-05-05 23:07:57 +00:00
dg
77bdc727ab Properly handle default options in the CLI. 2023-05-05 22:57:49 +00:00
dg
eb26426424 Consolidate the Victor formats into each other. 2023-05-05 22:29:26 +00:00
dg
f624bb6e5b Consolidate the Mac formats into each other. 2023-05-05 22:24:28 +00:00
dg
4a8fb9288c Remove obsolete file. 2023-05-05 22:16:11 +00:00
dg
f8f5873973 Consolidate (and typo fix) the ampro format. 2023-05-05 22:15:37 +00:00
dg
5f4903f2d1 Rename the commodore1541 options to be a bit more standard. 2023-05-05 22:07:13 +00:00
dg
b02a894663 Consolidate the Brother formats. 2023-05-05 22:03:49 +00:00
dg
510b530551 Consolidate all the IBM formats together. 2023-05-05 21:37:49 +00:00
dg
c36662205b Typo fix. 2023-05-05 21:18:27 +00:00
dg
a2ffe06792 Consolidate the MX formats into each other. 2023-05-05 21:16:26 +00:00
dg
0f56108bf5 Consolidate the Apple II formats together. 2023-05-05 21:11:06 +00:00
dg
199cefdb71 Fix radiobuttons for multiple option groups. 2023-05-05 21:06:57 +00:00
dg
1bdeaa326c Consolidate some Hewlett-Packard LIF disks together. 2023-05-05 20:46:49 +00:00
dg
cce8cfe88d Consolidate the Tiki 100 formats. 2023-05-05 20:36:39 +00:00
dg
bcfc0217dc Consolidate the Northstar formats into each other. 2023-05-05 20:29:45 +00:00
dg
7cfa080220 Merge from master. 2023-05-05 20:23:17 +00:00
dg
45ebc0f40f Consolidate the Micropolis formats into one. 2023-05-05 20:22:55 +00:00
dg
38d575eda7 Remember to set a default format. 2023-05-05 20:18:53 +00:00
dg
9cb284583b Consolidate all the Atari ST formats together. 2023-05-05 20:15:47 +00:00
dg
137b921e8d Consolidate all the Acorn formats together. 2023-05-05 20:07:44 +00:00
dg
8c876f555d Move from option exclusivity groups to option groups, which are better. 2023-05-05 19:55:56 +00:00
David Given
0988dd524b Merge 2dc649ef09 into 51fa3c5293 2023-05-04 21:10:25 +00:00
dg
2dc649ef09 Add read-only support for LIF filesystems. 2023-05-04 21:04:55 +00:00
dg
baf02cb849 Add support for the HPLIF 616kB format (contributed by Eric Rechlin). 2023-05-04 19:12:51 +00:00
David Given
51fa3c5293 Merge pull request #664 from bdwheele/ibmpc-8-sector-formats
Adding IBM PC 8-sector formats
2023-05-02 12:27:15 +01:00
Brian Wheeler
134dd6c37d Adding IBM PC 8-sector formats 2023-05-01 08:24:24 -04:00
David Given
d766e1f9a9 Merge pull request #663 from ejona86/micropolis-200ms
Micropolis: disk rotate period is 200 ms
2023-04-24 13:12:18 +02:00
Eric Anderson
d298f5b16e Micropolis: disk rotate period is 200 ms
The disks are expected to contain 100,000 bitcells, so clock_period_us
and rotational_period_ms need to align.
2023-04-23 13:54:50 -07:00
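
The arithmetic behind this change, written out: 200 ms per revolution divided by 100,000 bitcells gives a 2 µs bitcell clock, so the two settings agree exactly. A self-contained check of that derivation:

    // 200 ms per revolution / 100,000 bitcells = 2 us per bitcell,
    // which is why clock_period_us and rotational_period_ms must be set together.
    static_assert(200'000 /* us per revolution */ / 100'000 /* bitcells */ == 2,
        "Micropolis bitcell clock should be 2 us");
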
dg
ed634fbbf6 Fix build failure. 2023-04-07 16:20:32 +00:00
dg
4c776d584b Add read support for A2R v2 files. 2023-04-07 15:00:20 +00:00
David Given
c2c04862a2 Merge pull request #662 from davidgiven/scp
Adjust the SCP write logic so an unspecified TPI is treated as 96.
2023-04-07 11:25:00 +02:00
dg
ccd9539015 Adjust the SCP write logic so an unspecified TPI is treated as 96 (the usual). 2023-04-07 09:02:46 +00:00
David Given
624c597735 Merge pull request #661 from davidgiven/scp
Fix reading 48tpi SCP files.
2023-04-06 23:51:30 +02:00
dg
9300aa79c3 Read 48tpi SCP files correctly. 2023-04-06 21:49:06 +00:00
David Given
9e522c7da2 Merge ef60bfff6b into df6e47fa50 2023-04-06 18:20:31 +00:00
dg
ef60bfff6b Looks like the Roland D-20 format is the same as Brother240??? 2023-04-06 17:07:00 +00:00
dg
635c6c7bfe Add an explorer option to show raw bits. 2023-04-06 16:07:18 +00:00
David Given
df6e47fa50 Merge pull request #659 from davidgiven/n88
Add a histogram viewer to the imager. Because it's there.
2023-04-06 11:20:36 +02:00
dg
654cdcd3d1 Add a histogram viewer to the imager. Because it's there. 2023-04-06 08:59:05 +00:00
dg
a633b73e12 Add boilerplate for Roland D20 decoder. 2023-04-05 22:36:54 +00:00
David Given
ba93dae240 Merge pull request #657 from davidgiven/d20
Improve the explorer.
2023-04-05 23:11:49 +02:00
dg
8e0ca85d1e Add the histogram viewer and clock guess button. 2023-04-05 20:43:49 +00:00
dg
56a4926bd3 Factor out the clock guess code so it can be used elsewhere. 2023-04-05 19:17:37 +00:00
dg
6a2aae4ef2 Create new branch named "d20" 2023-04-05 17:47:31 +00:00
dg
ec68ce3bfa Try to fix dev release. 2023-04-04 22:43:06 +00:00
dg
a777a5be30 Typo fix. 2023-04-04 21:48:45 +00:00
David Given
b553a8b1fb Merge pull request #654 from davidgiven/search
Overhaul the GUI, to make it... gooier.
2023-04-04 22:37:37 +01:00
dg
b119e1f72d Tidying. 2023-04-04 21:02:03 +00:00
dg
7345d3e6c1 Fix merge conflict. 2023-04-04 20:23:05 +00:00
dg
e9b7a7bb52 Fix the icon background colour on Windows. 2023-04-04 20:20:32 +00:00
dg
2022732dd9 Some final tidying. 2023-04-04 20:12:21 +00:00
dg
63544647b6 Add a custom IconButton class. Rework the source icon list. Again. 2023-04-04 19:42:24 +00:00
dg
6b62585ad5 Be more intelligent about resizing the main window. 2023-04-03 22:45:25 +00:00
dg
14027210f7 Even more GUI tweaking. 2023-04-03 21:53:36 +00:00
dg
3df17b23b8 Turns out you can't unselect exclusive options in the GUI, so add an 'off' for
the Apple filesystem selection.
2023-04-03 21:53:26 +00:00
dg
cbf3f56562 The xxd binary is in the vim package. For some reason. 2023-04-02 22:59:20 +00:00
dg
1f74d9189f Make the new GUI actually work, to a certain extent. 2023-04-02 22:54:09 +00:00
dg
137658d1d6 Flesh out the source list a bit. 2023-04-02 21:34:02 +00:00
dg
5b627bd2b1 wxImageList tweak. 2023-04-02 19:54:08 +00:00
dg
38ff08885a Experiment with wxImageList. 2023-04-02 19:42:55 +00:00
dg
a89993aabb Fix the UI. 2023-04-02 19:19:16 +00:00
dg
d6353403e2 Set the icon again. 2023-04-02 17:14:59 +00:00
dg
bc62ee04c0 Some random tweaks to improve state machine look and feel. 2023-04-02 17:11:46 +00:00
dg
d3ff836b63 Put the headers in the right order to keep Windows happy. 2023-04-02 16:54:37 +00:00
dg
a7aac5578e Remove the explorer search button for now. 2023-04-02 16:41:56 +00:00
dg
add5a141d3 Actually make the new GUI model work. Mostly? 2023-04-02 12:38:12 +00:00
dg
330410ec61 Rework the GUI so that each panel is a different class. It doesn't work yet,
but the bulk of the restructuring is done.
2023-04-02 12:37:27 +00:00
dg
d0f49dcfa6 Add (but don't implement) the explorer search box. 2023-04-01 18:27:01 +00:00
David Given
124f6ab7cb Merge 471f63592e into e4204196cd 2023-04-01 13:05:59 +00:00
dg
471f63592e Typo fix. 2023-04-01 12:56:17 +00:00
dg
50e210c72f It seems the build artifact needs to be renamed for 10.15. 2023-04-01 12:40:05 +00:00
dg
d3396aa535 Use two threads for building --- seems we can do this on github. 2023-04-01 12:32:47 +00:00
dg
5ed8b838bc Another typo fix. 2023-04-01 12:15:04 +00:00
dg
d1757eacc2 Typo fix. 2023-04-01 12:14:37 +00:00
dg
0692e5f5d5 Try building for OSX 10.15 and see what happens. 2023-04-01 12:13:34 +00:00
David Given
e4204196cd Merge pull request #650 from davidgiven/flags
Allow options to be set in the GUI.
2023-03-31 23:37:17 +01:00
dg
241d4342e4 Make exclusivity groups work in the GUI. 2023-03-31 22:11:40 +00:00
dg
c04cbc631c Option name tidy. 2023-03-31 22:11:19 +00:00
dg
29b273ad7b Correctly set the path of files. 2023-03-31 22:10:47 +00:00
dg
9720dab2f6 Optimise the option radiobuttons a bit. 2023-03-31 22:10:13 +00:00
dg
bddc64a324 Merge from master. 2023-03-31 22:09:11 +00:00
David Given
eb324f14de Merge pull request #648 from davidgiven/basis
Add support for the Basis-108 Apple II clone.
2023-03-31 22:34:32 +01:00
David Given
b78a057c81 Merge branch 'master' into basis 2023-03-31 22:10:47 +01:00
dg
5751725213 Allow options to be selected in the GUI. 2023-03-31 21:09:40 +00:00
dg
f194392f99 Fix the broken AppleDOS double-sided disks. Allow access to side 1 on AppleDOS
volumes.
2023-03-31 18:24:03 +00:00
dg
fea62178af Apply what might be the right translation to the CP/M boot tracks. 2023-03-31 18:06:21 +00:00
David Given
33ef4ce8de Merge pull request #649 from davidgiven/pme
Rename the PME format to psos800.
2023-03-31 18:56:34 +01:00
dg
3728120f95 Add support for CP/M disks and filesystems. 2023-03-31 17:56:18 +00:00
dg
2944b9b3f6 Rename the PME format to psos800. 2023-03-31 17:23:33 +00:00
David Given
3430574364 Merge pull request #646 from davidgiven/pme
Add a format for the PME-68-12 SBC.
2023-03-31 11:59:00 +01:00
dg
fc5a5212c0 Merge. 2023-03-30 22:21:30 +00:00
dg
20f724ed13 Update README. 2023-03-30 22:21:00 +00:00
dg
94c1d21938 Rename the pme profile to pme68_800. 2023-03-30 22:20:29 +00:00
David Given
a1a9666b6f Fix the AppleDOS sector translation. 2023-03-30 12:26:13 +02:00
dg
0551ddc276 Add write support for Apple II 640kB disks. 2023-03-28 20:36:43 +00:00
dg
049ffd3b04 Add a profile for the Basis Apple II format. 2023-03-28 19:40:58 +00:00
dg
c28f757c5c Add a very prototype AppleDOS VFS plugin. 2023-03-28 19:29:02 +00:00
dg
91dbb86e64 Add missing files. Rename the Apple II formats. 2023-03-28 16:29:59 +00:00
dg
27a04ee22b Add initial support for the Basis-108. 2023-03-27 23:07:59 +00:00
dg@cowlark.com
5cefce9922 Fix the thread termination errors every time the directory browser is used. 2023-03-27 21:06:59 +00:00
dg
8fb4c90bed Remove the retry limit when reading from virtual flux sources, to allow flux
files with very large numbers of reads to be processed.
2023-03-27 20:14:49 +00:00
dg
81753669cc Add the 'fluxengine merge' command. 2023-03-27 20:12:46 +00:00
dg
0a0a72bcf3 Add configurable head jiggle on error, just to see if the head needs settling. 2023-03-27 18:40:35 +00:00
dg
c4a6e3e063 Fix the Windows development build artifact. 2023-03-26 23:15:20 +00:00
dg
1138e6b77f Try a different way to fetch the filedes length. 2023-03-26 21:22:11 +00:00
dg
030f9218d6 Hopefully fix the layout this time? 2023-03-26 21:17:07 +00:00
dg
2fff32e8f2 Don't return bad data which makes the GUI crash. 2023-03-26 18:52:29 +00:00
dg
5b2aa9926f Robustness and warning fixes. 2023-03-26 18:50:14 +00:00
dg
921e178e83 Tone down the bad-sector-size warning a bit. 2023-03-26 18:23:25 +00:00
dg
25ffd900c8 Realise that the PME format is HCS. Add a really basic and probably broken
PHILE filesystem reader.
2023-03-26 18:21:51 +00:00
dg
7ea4e116cc Add a warning if the configured sector size doesn't match the one on disk. 2023-03-26 16:25:40 +00:00
dg
a9daec36f5 Add prototype PME-68-12 format. 2023-03-24 21:07:48 +00:00
David Given
cebc7c6cd2 Merge 3f85c9f006 into 909f0d628b 2023-01-06 21:30:18 +00:00
dg
3f85c9f006 Adjust timings to be more correct. 2023-01-06 21:28:51 +00:00
dg
ed5efd7b87 Reenable optimisation. Again. 2023-01-06 21:28:35 +00:00
dg
4984a53bfd First hypothetically working version of the agat encoder. 2023-01-05 18:36:01 +00:00
dg
b0c77653a2 Add the boilerplate for the Agat encoder. 2023-01-05 12:04:36 +00:00
David Given
909f0d628b Merge pull request #637 from davidgiven/cpm
Fix an issue with extent handling in the CP/M file system.
2022-12-18 23:21:45 +01:00
dg
e27e3ada92 Fix an issue with extent handling in the CP/M file system; actually add a CP/M
test.
2022-12-18 22:00:52 +00:00
dg
339ea3b5a4 Move the * and + Bytes methods onto Bytes itself. 2022-12-18 22:00:16 +00:00
dg
9bd8b8915e Update format file. 2022-12-18 21:59:14 +00:00
dg
35008656a9 Remove stray logging. 2022-12-17 17:54:33 +00:00
David Given
825089458f Merge pull request #636 from davidgiven/tiki
Add support for the Tiki 100 formats.
2022-12-17 12:20:18 +01:00
dg
4a086d94b7 Add best-guess CP/M filesystem definitions for the Tiki 90kB and 800kB formats. 2022-12-17 11:01:46 +00:00
dg
0aeddf7e98 Add support for the Tiki 100 formats. 2022-12-17 10:59:30 +00:00
David Given
4922d1deb4 Merge pull request #634 from davidgiven/mac2
Fix sector skew, again.
2022-12-05 21:57:45 +01:00
dg
86d0893261 Adjust mac encoder clock to be more like the real thing. 2022-12-05 20:27:52 +00:00
dg
e4c67f18bd Fix the sector skew stuff, again. Modify the mac400 format to emit sectors in
the right order.
2022-12-05 20:22:01 +00:00
David Given
d07c5a94e1 Merge pull request #632 from davidgiven/layout
Rework the layout stuff to be more correct.
2022-12-04 21:32:17 +01:00
dg
a91dee27e7 Rework the layout stuff to be more correct. Physical skew no longer affects the
order in the resulting images.
2022-12-04 19:19:37 +00:00
David Given
e3219087c9 Merge pull request #630 from davidgiven/brother
Fix some nasty Brother bugs.
2022-12-02 22:20:32 +01:00
dg
cc9ec84aec Physical skew turns out to be horribly broken, so turn it off for the Brother
formats (the only ones which use it) until we can sort it out.
2022-12-02 20:17:42 +00:00
dg
a33cc5710c Be more rigorous about checking for invalid brother120fs filesystems --- even
though the filesystem is so simple that positively identifying it is quite
hard.
2022-12-02 19:54:58 +00:00
David Given
c2b148288a Merge pull request #628 from davidgiven/osx
Fix a bunch of OSX things.
2022-12-01 22:24:47 +01:00
David Given
a483567564 Fix the explorer to work on OSX. Lots of other vaguely OSX-related changes. 2022-12-01 21:37:59 +01:00
David Given
bd99bc6d94 Don't trust isprint() to return ascii characters, because Unicode. 2022-12-01 21:28:49 +01:00
David Given
8f79071aad Turn optimisation back on! 2022-12-01 21:28:31 +01:00
David Given
ef9071049b Merge pull request #627 from davidgiven/osx
Produce more correct OSX app bundles.
2022-12-01 20:53:58 +01:00
David Given
60e1ab8cca Dependency fix? 2022-12-01 20:21:33 +01:00
David Given
d3dbfd3154 Use dylibbundler to create possibly-working OSX app bundles. 2022-12-01 19:49:50 +01:00
David Given
ee2dffb498 Try and generate correct OSX app bundles. 2022-12-01 19:45:51 +01:00
David Given
6d9510cc65 Merge pull request #626 from elosha/macosxfixes
Library fallback path fixed & MacPorts compatible
2022-12-01 17:17:36 +01:00
Eliza Winterborn
49f0f5d000 Library fallback path fixed & MacPorts compatible
Use correct variable. Also look for libs in MacPorts' default lib path /opt/local/lib, not just HomeBrew's
2022-12-01 17:03:36 +01:00
462 changed files with 24330 additions and 16777 deletions

@@ -18,17 +18,19 @@ AlwaysBreakBeforeMultilineStrings: 'true'
AlwaysBreakTemplateDeclarations: 'Yes'
BinPackArguments: 'false'
BinPackParameters: 'false'
BreakConstructorInitializers: 'AfterColon'
BreakBeforeBraces: Allman
BreakConstructorInitializers: 'AfterColon'
BreakInheritanceList: AfterColon
BreakStringLiterals: 'true'
IndentCaseLabels: 'true'
IndentWidth: '4'
ColumnLimit: '80'
ConstructorInitializerAllOnOneLineOrOnePerLine: 'true'
FixNamespaceComments: 'false'
IncludeBlocks: Preserve
IndentCaseLabels: 'true'
IndentWidth: '4'
IndentWrappedFunctionNames: 'false'
KeepEmptyLinesAtTheStartOfBlocks: 'true'
NamespaceIndentation: All
PointerAlignment: Left
ReflowComments: 'true'
SortIncludes: 'false'

@@ -2,30 +2,48 @@ name: C/C++ CI
on: [push]
concurrency:
group: environment-${{ github.head_ref }}
cancel-in-progress: true
jobs:
build-linux:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine'
path: 'fluxengine'
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine-testdata'
path: 'fluxengine-testdata'
- name: apt
run: sudo apt update && sudo apt install libudev-dev libsqlite3-dev protobuf-compiler libwxgtk3.0-gtk3-dev libfmt-dev
- name: make
run: CXXFLAGS="-Wp,-D_GLIBCXX_ASSERTIONS" make
run: CXXFLAGS="-Wp,-D_GLIBCXX_ASSERTIONS" make -j2 -C fluxengine
build-macos:
build-macos-current:
runs-on: macos-latest
steps:
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine'
path: 'fluxengine'
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine-testdata'
path: 'fluxengine-testdata'
- name: brew
run: brew install sqlite pkg-config libusb protobuf wxwidgets fmt make coreutils
run: brew install sqlite pkg-config libusb protobuf wxwidgets fmt make coreutils dylibbundler libjpeg
- name: make
run: gmake
run: gmake -j2 -C fluxengine
- name: Upload build artifacts
uses: actions/upload-artifact@v2
with:
name: ${{ github.event.repository.name }}.${{ github.sha }}
path: FluxEngine.pkg
path: fluxengine.FluxEngine.pkg
build-windows:
runs-on: windows-latest
@@ -50,22 +68,32 @@ jobs:
mingw-w64-i686-zlib
mingw-w64-i686-nsis
zip
- uses: actions/checkout@v1
vim
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine'
path: 'fluxengine'
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine-testdata'
path: 'fluxengine-testdata'
- name: build
run: make
run: make -j2 -C fluxengine
- name: nsis
run: |
cd fluxengine
strip fluxengine.exe -o fluxengine-stripped.exe
strip fluxengine-gui.exe -o fluxengine-gui-stripped.exe
makensis -v2 -nocd -dOUTFILE=fluxengine-installer.exe extras/windows-installer.nsi
- name: zip
run: |
zip -9 fluxengine.zip fluxengine.exe fluxengine-gui.exe upgrade-flux-file.exe brother120tool.exe brother240tool.exe FluxEngine.cydsn/CortexM3/ARM_GCC_541/Release/FluxEngine.hex fluxengine-installer.exe
cd fluxengine
zip -9 fluxengine-windows.zip fluxengine.exe fluxengine-gui.exe upgrade-flux-file.exe brother120tool.exe brother240tool.exe FluxEngine.cydsn/CortexM3/ARM_GCC_541/Release/FluxEngine.hex fluxengine-installer.exe
- name: Upload build artifacts
uses: actions/upload-artifact@v2
with:
name: ${{ github.event.repository.name }}.${{ github.sha }}
path: fluxengine-windows.zip
path: fluxengine/fluxengine-windows.zip

@@ -1,5 +1,9 @@
name: Autorelease
concurrency:
group: environment-${{ github.head_ref }}
cancel-in-progress: true
on:
push:
branches:
@@ -29,11 +33,12 @@ jobs:
mingw-w64-i686-zlib
mingw-w64-i686-nsis
zip
vim
- uses: actions/checkout@v3
- name: build
run: |
make
make -j2
- name: nsis
run: |
@@ -83,7 +88,7 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: brew
run: brew install sqlite pkg-config libusb protobuf wxwidgets fmt make coreutils
run: brew install sqlite pkg-config libusb protobuf wxwidgets fmt make coreutils dylibbundler libjpeg
- name: make
run: gmake

Makefile (136 lines changed)

@@ -1,4 +1,4 @@
# Special Windows settings.
#Special Windows settings.
ifeq ($(OS), Windows_NT)
MINGWBIN = /mingw32/bin
@@ -16,12 +16,12 @@ ifeq ($(OS), Windows_NT)
-Wno-deprecated-enum-float-conversion \
-Wno-deprecated-enum-enum-conversion
# Required to get the gcc run-time libraries on the path.
#Required to get the gcc run - time libraries on the path.
export PATH := $(PATH):$(MINGWBIN)
EXT ?= .exe
endif
# Special OSX settings.
#Special OSX settings.
ifeq ($(shell uname),Darwin)
PLATFORM = OSX
@@ -30,14 +30,14 @@ ifeq ($(shell uname),Darwin)
-framework Foundation
endif
# Check the Make version.
#Check the Make version.
ifeq ($(findstring 4.,$(MAKE_VERSION)),)
$(error You need GNU Make 4.x for this (if you're on OSX, use gmake).)
endif
# Normal settings.
#Normal settings.
OBJDIR ?= .obj
CCPREFIX ?=
@@ -48,7 +48,7 @@ AR ?= $(CCPREFIX)ar
PKG_CONFIG ?= pkg-config
WX_CONFIG ?= wx-config
PROTOC ?= protoc
CFLAGS ?= -g -O0
CFLAGS ?= -g -O3
CXXFLAGS += -std=c++17
LDFLAGS ?=
PLATFORM ?= UNIX
@@ -65,6 +65,7 @@ CFLAGS += \
-I$(OBJDIR)/arch \
-I$(OBJDIR)/lib \
-I$(OBJDIR) \
-Wno-deprecated-declarations \
LDFLAGS += \
-lz \
@@ -77,6 +78,9 @@ define nl
endef
empty :=
space := $(empty) $(empty)
use-library = $(eval $(use-library-impl))
define use-library-impl
$1: $(call $3_LIB)
@@ -95,7 +99,7 @@ $(2): private CFLAGS += $(shell $(PKG_CONFIG) --cflags $(3))
endef
.PHONY: all binaries tests clean install install-bin
all: binaries tests
all: binaries tests docs
PROTOS = \
arch/aeslanier/aeslanier.proto \
@@ -111,6 +115,7 @@ PROTOS = \
arch/micropolis/micropolis.proto \
arch/mx/mx.proto \
arch/northstar/northstar.proto \
arch/rolandd20/rolandd20.proto \
arch/smaky6/smaky6.proto \
arch/tids990/tids990.proto \
arch/victor9k/victor9k.proto \
@@ -138,7 +143,7 @@ $(PROTO_SRCS): | $(PROTO_HDRS)
$(PROTO_OBJS): CFLAGS += $(PROTO_CFLAGS)
PROTO_LIB = $(OBJDIR)/libproto.a
$(PROTO_LIB): $(PROTO_OBJS)
PROTO_LDFLAGS = $(shell $(PKG_CONFIG) --libs protobuf) -pthread $(PROTO_LIB)
PROTO_LDFLAGS = $(shell $(PKG_CONFIG) --libs protobuf) -pthread
.PRECIOUS: $(PROTO_HDRS) $(PROTO_SRCS)
include dep/agg/build.mk
@@ -160,51 +165,90 @@ include tests/build.mk
do-encodedecodetest = $(eval $(do-encodedecodetest-impl))
define do-encodedecodetest-impl
tests: $(OBJDIR)/$1$3.flux.encodedecode
$(OBJDIR)/$1$3.flux.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
tests: $(OBJDIR)/$1$$(subst $$(space),_,$3).flux.encodedecode
$(OBJDIR)/$1$$(subst $$(space),_,$3).flux.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
@mkdir -p $(dir $$@)
@echo ENCODEDECODETEST .flux $1 $3
@echo ENCODEDECODETEST $1 flux $(FLUXENGINE_BIN) $2 $3
@scripts/encodedecodetest.sh $1 flux $(FLUXENGINE_BIN) $2 $3 > $$@
tests: $(OBJDIR)/$1$3.scp.encodedecode
$(OBJDIR)/$1$3.scp.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
tests: $(OBJDIR)/$1$$(subst $$(space),_,$3).scp.encodedecode
$(OBJDIR)/$1$$(subst $$(space),_,$3).scp.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
@mkdir -p $(dir $$@)
@echo ENCODEDECODETEST .scp $1 $3
@echo ENCODEDECODETEST $1 scp $(FLUXENGINE_BIN) $2 $3
@scripts/encodedecodetest.sh $1 scp $(FLUXENGINE_BIN) $2 $3 > $$@
endef
$(call do-encodedecodetest,amiga)
$(call do-encodedecodetest,apple2)
$(call do-encodedecodetest,atarist360)
$(call do-encodedecodetest,atarist370)
$(call do-encodedecodetest,atarist400)
$(call do-encodedecodetest,atarist410)
$(call do-encodedecodetest,atarist720)
$(call do-encodedecodetest,atarist740)
$(call do-encodedecodetest,atarist800)
$(call do-encodedecodetest,atarist820)
$(call do-encodedecodetest,bk800)
$(call do-encodedecodetest,brother120)
$(call do-encodedecodetest,brother240)
$(call do-encodedecodetest,commodore1541,scripts/commodore1541_test.textpb,--35)
$(call do-encodedecodetest,commodore1541,scripts/commodore1541_test.textpb,--40)
$(call do-encodedecodetest,commodore1581)
$(call do-encodedecodetest,cmd_fd2000)
$(call do-encodedecodetest,hp9121)
$(call do-encodedecodetest,ibm1200)
$(call do-encodedecodetest,ibm1232)
$(call do-encodedecodetest,ibm1440)
$(call do-encodedecodetest,ibm180)
$(call do-encodedecodetest,ibm360)
$(call do-encodedecodetest,ibm720)
$(call do-encodedecodetest,mac400,scripts/mac400_test.textpb)
$(call do-encodedecodetest,mac800,scripts/mac800_test.textpb)
$(call do-encodedecodetest,n88basic)
$(call do-encodedecodetest,rx50)
$(call do-encodedecodetest,tids990)
$(call do-encodedecodetest,victor9k_ss)
$(call do-encodedecodetest,victor9k_ds)
$(call do-encodedecodetest,agat,,--drive.tpi=96)
$(call do-encodedecodetest,amiga,,--drive.tpi=135)
$(call do-encodedecodetest,apple2,,--140 --drive.tpi=96)
$(call do-encodedecodetest,atarist,,--360 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--370 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--400 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--410 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--720 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--740 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--800 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--820 --drive.tpi=135)
$(call do-encodedecodetest,bk)
$(call do-encodedecodetest,brother,,--120 --drive.tpi=135)
$(call do-encodedecodetest,brother,,--240 --drive.tpi=135)
$(call do-encodedecodetest,commodore,scripts/commodore1541_test.textpb,--171 --drive.tpi=96)
$(call do-encodedecodetest,commodore,scripts/commodore1541_test.textpb,--192 --drive.tpi=96)
$(call do-encodedecodetest,commodore,,--800 --drive.tpi=135)
$(call do-encodedecodetest,commodore,,--1620 --drive.tpi=135)
$(call do-encodedecodetest,hplif,,--264 --drive.tpi=135)
$(call do-encodedecodetest,hplif,,--616 --drive.tpi=135)
$(call do-encodedecodetest,hplif,,--770 --drive.tpi=135)
$(call do-encodedecodetest,ibm,,--1200 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--1232 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--1440 --drive.tpi=135)
$(call do-encodedecodetest,ibm,,--1680 --drive.tpi=135)
$(call do-encodedecodetest,ibm,,--180 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--160 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--320 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--360 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--720_96 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--720_135 --drive.tpi=135)
$(call do-encodedecodetest,mac,scripts/mac400_test.textpb,--400 --drive.tpi=135)
$(call do-encodedecodetest,mac,scripts/mac800_test.textpb,--800 --drive.tpi=135)
$(call do-encodedecodetest,n88basic,,--drive.tpi=96)
$(call do-encodedecodetest,rx50,,--drive.tpi=96)
$(call do-encodedecodetest,tids990,,--drive.tpi=48)
$(call do-encodedecodetest,victor9k,,--612 --drive.tpi=96)
$(call do-encodedecodetest,victor9k,,--1224 --drive.tpi=96)
do-corpustest = $(eval $(do-corpustest-impl))
define do-corpustest-impl
tests: $(OBJDIR)/corpustest/$2
$(OBJDIR)/corpustest/$2: $(FLUXENGINE_BIN) \
../fluxengine-testdata/data/$1 ../fluxengine-testdata/data/$2
@mkdir -p $(OBJDIR)/corpustest
@echo CORPUSTEST $1 $2 $3
@$(FLUXENGINE_BIN) read $3 -s ../fluxengine-testdata/data/$1 -o $$@ > $$@.log
@cmp $$@ ../fluxengine-testdata/data/$2
endef
ifneq ($(wildcard ../fluxengine-testdata/data),)
$(call do-corpustest,amiga.flux,amiga.adf,amiga --drive.tpi=135)
$(call do-corpustest,atarist360.flux,atarist360.st,atarist --360 --drive.tpi=135)
$(call do-corpustest,atarist720.flux,atarist720.st,atarist --720 --drive.tpi=135)
$(call do-corpustest,brother120.flux,brother120.img,brother --120 --drive.tpi=135)
$(call do-corpustest,cmd-fd2000.flux,cmd-fd2000.img,commodore --1620 --drive.tpi=135)
$(call do-corpustest,ibm1232.flux,ibm1232.img,ibm --1232 --drive.tpi=96)
$(call do-corpustest,ibm1440.flux,ibm1440.img,ibm --1440 --drive.tpi=135)
$(call do-corpustest,mac800.flux,mac800.dsk,mac --800 --drive.tpi=135)
$(call do-corpustest,micropolis315.flux,micropolis315.img,micropolis --315 --drive.tpi=100)
$(call do-corpustest,northstar87-synthetic.flux,northstar87-synthetic.nsi,northstar --87 --drive.tpi=48)
$(call do-corpustest,northstar175-synthetic.flux,northstar175-synthetic.nsi,northstar --175 --drive.tpi=48)
$(call do-corpustest,northstar350-synthetic.flux,northstar350-synthetic.nsi,northstar --350 --drive.tpi=48)
$(call do-corpustest,victor9k_ss.flux,victor9k_ss.img,victor9k --612 --drive.tpi=96)
$(call do-corpustest,victor9k_ds.flux,victor9k_ds.img,victor9k --1224 --drive.tpi=96)
endif
$(OBJDIR)/%.a:
@mkdir -p $(dir $@)
@@ -214,7 +258,7 @@ $(OBJDIR)/%.a:
%.exe:
@mkdir -p $(dir $@)
@echo LINK $@
@$(CXX) -o $@ $^ $(LDFLAGS) $(LDFLAGS)
@$(CXX) -o $@ $(filter %.o,$^) $(filter %.a,$^) $(LDFLAGS) $(filter %.a,$^) $(LDFLAGS)
$(OBJDIR)/%.o: %.cpp
@mkdir -p $(dir $@)

README.md (110 lines changed)

@@ -35,11 +35,11 @@ Don't believe me? Watch the demo reel!
</div>
**New!** The FluxEngine client software now works with
[GreaseWeazle](https://github.com/keirf/Greaseweazle/wiki) hardware. So, if you
[Greaseweazle](https://github.com/keirf/Greaseweazle/wiki) hardware. So, if you
can't find a PSoC5 development kit, or don't want to use the Cypress Windows
tools for programming it, you can use one of these instead. Very nearly all
FluxEngine features are available with the GreaseWeazle and it works out-of-the
box. See the [dedicated GreaseWeazle documentation page](doc/greaseweazle.md)
FluxEngine features are available with the Greaseweazle and it works out-of-the
box. See the [dedicated Greaseweazle documentation page](doc/greaseweazle.md)
for more information.
Where?
@@ -65,7 +65,7 @@ following friendly articles:
- [Using a FluxEngine](doc/using.md) ∾ what to do with your new hardware ∾
flux files and image files ∾ knowing what you're doing
- [Using GreaseWeazle hardware with the FluxEngine client
- [Using Greaseweazle hardware with the FluxEngine client
software](doc/greaseweazle.md) ∾ what works ∾ what doesn't work ∾ where to
go for help
@@ -88,62 +88,60 @@ Which?
The current support state is as follows.
Dinosaurs (🦖) have yet to be observed in real life --- I've written the
decoder based on Kryoflux (or other) dumps I've found. I don't (yet) have
real, physical disks in my hand to test the capture process.
Dinosaurs (🦖) have yet to be observed in real life --- I've written the encoder
and/or decoder based on Kryoflux (or other) dumps I've found. I don't (yet) have
real, physical disks in my hand to test the capture process, or hardware to
verify that written disks work.
Unicorns (🦄) are completely real --- this means that I've read actual,
physical disks with these formats and so know they work (or had reports from
people who've had it work).
Unicorns (🦄) are completely real --- this means that I've read actual, physical
disks with these formats and/or written real, physical disks and then used them
on real hardware, and so know they work (or had reports from people who've had
it work).
### Old disk formats
If a filesystem is listed, this means that FluxEngine natively supports that
particular filesystem and can read (and sometimes write, support varies) files
directly from disks, flux files or disk images. Some formats have multiple
choices because they can store multiple types of file system.
| Format | Read? | Write? | Notes |
|:------------------------------------------|:-----:|:------:|-------|
| [IBM PC compatible](doc/disk-ibm.md) | 🦄 | 🦄 | and compatibles (like the Atari ST) |
| [Atari ST](doc/disk-atarist.md) | 🦄 | 🦄 | technically the same as IBM, almost |
| [Acorn ADFS](doc/disk-acornadfs.md) | 🦄 | 🦖* | single- and double- sided |
| [Acorn DFS](doc/disk-acorndfs.md) | 🦄 | 🦖* | |
| [Ampro Little Board](doc/disk-ampro.md) | 🦖 | 🦖* | |
| [Agat](doc/disk-agat.md) | 🦖 | | Soviet Union Apple-II-like computer |
| [Apple II](doc/disk-apple2.md) | 🦄 | 🦄 | |
| [Amiga](doc/disk-amiga.md) | 🦄 | 🦄 | |
| [Commodore 64 1541/1581](doc/disk-c64.md) | 🦄 | 🦄 | and probably the other formats |
| [Brother 120kB](doc/disk-brother.md) | 🦄 | 🦄 | |
| [Brother 240kB](doc/disk-brother.md) | 🦄 | 🦄 | |
| [Brother FB-100](doc/disk-fb100.md) | 🦖 | | Tandy Model 100, Husky Hunter, knitting machines |
| [Elektronika BK](doc/disk-bd.md) | 🦄 | 🦄 | Soviet Union PDP-11 clone |
| [Macintosh 400kB/800kB](doc/disk-macintosh.md) | 🦄 | 🦄 | |
| [NEC PC-98](doc/disk-ibm.md) | 🦄 | 🦄 | trimode drive not required |
| [Sharp X68000](doc/disk-ibm.md) | 🦄 | 🦄 | |
| [Smaky 6](doc/disk-smaky6.md) | 🦖 | | 5.25" hard sectored |
| [TRS-80](doc/disk-trs80.md) | 🦖 | 🦖* | a minor variation of the IBM scheme |
<!-- FORMATSSTART -->
<!-- This section is automatically generated. Do not edit. -->
| Profile | Format | Read? | Write? | Filesystem? |
|:--------|:-------|:-----:|:------:|:------------|
| [`acornadfs`](doc/disk-acornadfs.md) | Acorn ADFS: BBC Micro, Archimedes | 🦖 | | |
| [`acorndfs`](doc/disk-acorndfs.md) | Acorn DFS: Acorn Atom, BBC Micro series | 🦄 | | ACORNDFS |
| [`aeslanier`](doc/disk-aeslanier.md) | AES Lanier "No Problem": 616kB 5.25" 77-track SSDD hard sectored | 🦖 | | |
| [`agat`](doc/disk-agat.md) | Agat: 840kB 5.25" 80-track DS | 🦖 | 🦖 | |
| [`amiga`](doc/disk-amiga.md) | Amiga: 880kB 3.5" DSDD | 🦄 | 🦄 | AMIGAFFS |
| [`ampro`](doc/disk-ampro.md) | Ampro Little Board: CP/M | 🦖 | | CPMFS |
| [`apple2`](doc/disk-apple2.md) | Apple II: Prodos, Appledos, and CP/M | 🦄 | 🦄 | APPLEDOS CPMFS PRODOS |
| [`atarist`](doc/disk-atarist.md) | Atari ST: Almost PC compatible | 🦄 | 🦄 | |
| [`bk`](doc/disk-bk.md) | BK: 800kB 5.25"/3.5" 80-track 10-sector DSDD | 🦖 | 🦖 | |
| [`brother`](doc/disk-brother.md) | Brother word processors: GCR family | 🦄 | 🦄 | BROTHER120 FATFS |
| [`commodore`](doc/disk-commodore.md) | Commodore: 1541, 1581, 8050 and variations | 🦄 | 🦄 | CBMFS |
| [`eco1`](doc/disk-eco1.md) | VDS Eco1: CP/M; 1210kB 77-track mixed format DSHD | 🦖 | | CPMFS |
| [`epsonpf10`](doc/disk-epsonpf10.md) | Epson PF-10: CP/M; 3.5" 40-track DSDD | 🦖 | | CPMFS |
| [`f85`](doc/disk-f85.md) | Durango F85: 461kB 5.25" 77-track SS | 🦖 | | |
| [`fb100`](doc/disk-fb100.md) | Brother FB-100: 100kB 3.5" 40-track SSSD | 🦖 | | |
| [`hplif`](doc/disk-hplif.md) | Hewlett-Packard LIF: a variety of disk formats used by HP | 🦄 | 🦄 | LIF |
| [`ibm`](doc/disk-ibm.md) | IBM PC: Generic PC 3.5"/5.25" disks | 🦄 | 🦄 | FATFS |
| [`icl30`](doc/disk-icl30.md) | ICL Model 30: CP/M; 263kB 35-track DSSD | 🦖 | | CPMFS |
| [`mac`](doc/disk-mac.md) | Macintosh: 400kB/800kB 3.5" GCR | 🦄 | 🦄 | MACHFS |
| [`micropolis`](doc/disk-micropolis.md) | Micropolis: 100tpi MetaFloppy disks | 🦄 | 🦄 | |
| [`mx`](doc/disk-mx.md) | DVK MX: Soviet-era PDP-11 clone | 🦖 | | |
| [`n88basic`](doc/disk-n88basic.md) | N88-BASIC: PC8800/PC98 5.25" 77-track 26-sector DSHD | 🦄 | 🦄 | |
| [`northstar`](doc/disk-northstar.md) | Northstar: 5.25" hard sectored | 🦄 | 🦄 | |
| [`psos`](doc/disk-psos.md) | pSOS: 800kB DSDD with PHILE | 🦄 | 🦄 | PHILE |
| [`rolandd20`](doc/disk-rolandd20.md) | Roland D20: 3.5" electronic synthesiser disks | 🦖 | | |
| [`rx50`](doc/disk-rx50.md) | Digital RX50: 400kB 5.25" 80-track 10-sector SSDD | 🦖 | 🦖 | |
| [`smaky6`](doc/disk-smaky6.md) | Smaky 6: 308kB 5.25" 77-track 16-sector SSDD, hard sectored | 🦖 | | SMAKY6 |
| [`tids990`](doc/disk-tids990.md) | Texas Instruments DS990: 1126kB 8" DSSD | 🦖 | 🦖 | |
| [`tiki`](doc/disk-tiki.md) | Tiki 100: CP/M | | | CPMFS |
| [`victor9k`](doc/disk-victor9k.md) | Victor 9000 / Sirius One: 1224kB 5.25" DSDD GCR | 🦖 | 🦖 | |
| [`zilogmcz`](doc/disk-zilogmcz.md) | Zilog MCZ: 320kB 8" 77-track SSSD hard-sectored | 🦖 | | ZDOS |
{: .datatable }
`*`: these formats are variations of the generic IBM format, and since the
IBM writer is completely generic, it should be configurable for these
formats... theoretically. I don't have the hardware to try it.
### Even older disk formats
These formats are for particularly old, weird architectures, even by the
standards of floppy disks. They've largely been implemented from single flux
files with no access to physical hardware. Typically the reads were pretty
bad and I've had to make a number of guesses as to how things work. They do,
at least, check the CRC so what data's there is probably good.
| Format | Read? | Write? | Notes |
|:-----------------------------------------|:-----:|:------:|-------|
| [AES Superplus / No Problem](doc/disk-aeslanier.md) | 🦖 | | hard sectors! |
| [Durango F85](doc/disk-durangof85.md) | 🦖 | | 5.25" |
| [DVK MX](doc/disk-mx.md) | 🦖 | | Soviet PDP-11 clone |
| [VDS Eco1](doc/disk-eco1.md) | 🦖 | | 8" mixed format |
| [Micropolis](doc/disk-micropolis.md) | 🦄 | | Micropolis 100tpi drives |
| [Northstar](doc/disk-northstar.md) | 🦖 | 🦖 | 5.25" hard sectors |
| [TI DS990 FD1000](doc/disk-tids990.md) | 🦄 | 🦄 | 8" |
| [Victor 9000](doc/disk-victor9k.md) | 🦖 | | 5.25" GCR encoded |
| [Zilog MCZ](doc/disk-zilogmcz.md) | 🦖 | | 8" _and_ hard sectors |
{: .datatable }
<!-- FORMATSEND -->
### Notes
@@ -262,5 +260,3 @@ __Important:__ Because of all these exceptions, if you distribute the
FluxEngine package as a whole, you must comply with the terms of _all_ of the
licensing terms. This means that __effectively the FluxEngine package is
distributable under the terms of the GPL 2.0__.

@@ -2,9 +2,10 @@
#define AESLANIER_H
#define AESLANIER_RECORD_SEPARATOR 0x55555122
#define AESLANIER_SECTOR_LENGTH 256
#define AESLANIER_RECORD_SIZE (AESLANIER_SECTOR_LENGTH + 5)
#define AESLANIER_SECTOR_LENGTH 256
#define AESLANIER_RECORD_SIZE (AESLANIER_SECTOR_LENGTH + 5)
extern std::unique_ptr<Decoder> createAesLanierDecoder(const DecoderProto& config);
extern std::unique_ptr<Decoder> createAesLanierDecoder(
const DecoderProto& config);
#endif

@@ -11,56 +11,54 @@
static const FluxPattern SECTOR_PATTERN(32, AESLANIER_RECORD_SEPARATOR);
/* This is actually M2FM, rather than MFM, but it our MFM/FM decoder copes fine with it. */
/* This is actually M2FM, rather than MFM, but it our MFM/FM decoder copes fine
* with it. */
class AesLanierDecoder : public Decoder
{
public:
AesLanierDecoder(const DecoderProto& config):
Decoder(config)
{}
AesLanierDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(SECTOR_PATTERN);
}
{
return seekToPattern(SECTOR_PATTERN);
}
void decodeSectorRecord() override
{
/* Skip ID mark (we know it's a AESLANIER_RECORD_SEPARATOR). */
{
/* Skip ID mark (we know it's a AESLANIER_RECORD_SEPARATOR). */
readRawBits(16);
readRawBits(16);
const auto& rawbits = readRawBits(AESLANIER_RECORD_SIZE*16);
const auto& bytes = decodeFmMfm(rawbits).slice(0, AESLANIER_RECORD_SIZE);
const auto& reversed = bytes.reverseBits();
const auto& rawbits = readRawBits(AESLANIER_RECORD_SIZE * 16);
const auto& bytes =
decodeFmMfm(rawbits).slice(0, AESLANIER_RECORD_SIZE);
const auto& reversed = bytes.reverseBits();
_sector->logicalTrack = reversed[1];
_sector->logicalSide = 0;
_sector->logicalSector = reversed[2];
_sector->logicalTrack = reversed[1];
_sector->logicalSide = 0;
_sector->logicalSector = reversed[2];
/* Check header 'checksum' (which seems far too simple to mean much). */
/* Check header 'checksum' (which seems far too simple to mean much). */
{
uint8_t wanted = reversed[3];
uint8_t got = reversed[1] + reversed[2];
if (wanted != got)
return;
}
{
uint8_t wanted = reversed[3];
uint8_t got = reversed[1] + reversed[2];
if (wanted != got)
return;
}
/* Check data checksum, which also includes the header and is
* significantly better. */
/* Check data checksum, which also includes the header and is
* significantly better. */
_sector->data = reversed.slice(1, AESLANIER_SECTOR_LENGTH);
uint16_t wanted = reversed.reader().seek(0x101).read_le16();
uint16_t got = crc16ref(MODBUS_POLY_REF, _sector->data);
_sector->status = (wanted == got) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = reversed.slice(1, AESLANIER_SECTOR_LENGTH);
uint16_t wanted = reversed.reader().seek(0x101).read_le16();
uint16_t got = crc16ref(MODBUS_POLY_REF, _sector->data);
_sector->status = (wanted == got) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createAesLanierDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new AesLanierDecoder(config));
return std::unique_ptr<Decoder>(new AesLanierDecoder(config));
}

@@ -8,15 +8,13 @@ uint8_t agatChecksum(const Bytes& bytes)
{
uint16_t checksum = 0;
for (uint8_t b : bytes)
{
if (checksum > 0xff)
checksum = (checksum + 1) & 0xff;
for (uint8_t b : bytes)
{
if (checksum > 0xff)
checksum = (checksum + 1) & 0xff;
checksum += b;
}
checksum += b;
}
return checksum & 0xff;
return checksum & 0xff;
}

@@ -3,9 +3,17 @@
#define AGAT_SECTOR_SIZE 256
static constexpr uint64_t SECTOR_ID = 0x8924555549111444;
static constexpr uint64_t DATA_ID = 0x8924555514444911;
class Encoder;
class EncoderProto;
class Decoder;
class DecoderProto;
extern std::unique_ptr<Decoder> createAgatDecoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createAgatEncoder(const EncoderProto& config);
extern uint8_t agatChecksum(const Bytes& bytes);
#endif

@@ -1,5 +1,19 @@
syntax = "proto2";
import "lib/common.proto";
message AgatDecoderProto {}
message AgatEncoderProto {
optional double target_clock_period_us = 1
[default=2.00, (help)="Data clock period of target format."];
optional double target_rotational_period_ms = 2
[default=200.0, (help)="Rotational period of target format."];
optional int32 post_index_gap_bytes = 3
[default=40, (help)="Post-index gap before first sector header."];
optional int32 pre_sector_gap_bytes = 4
[default=11, (help)="Gap before each sector header."];
optional int32 pre_data_gap_bytes = 5
[default=2, (help)="Gap before each sector data record."];
}

@@ -9,13 +9,14 @@
#include "fmt/format.h"
#include <string.h>
// clang-format off
/*
* data: X X X X X X X X X - - X - X - X - X X - X - X - = 0xff956a
* flux: 01 01 01 01 01 01 01 01 01 00 10 01 00 01 00 01 00 01 01 00 01 00 01 00 = 0x555549111444
*
* data: X X X X X X X X - X X - X - X - X - - X - X - X = 0xff6a95
* flux: 01 01 01 01 01 01 01 01 00 01 01 00 01 00 01 00 01 00 10 01 00 01 00 01 = 0x555514444911
*
*
* Each pattern is prefixed with this one:
*
* data: - - - X - - X - = 0x12
@@ -30,68 +31,59 @@
* 0100010010010010 = MFM encoded
* 1000100100100100 = with trailing zero
* - - - X - - X - = effective bitstream = 0x12
*
*/
// clang-format on
static const uint64_t SECTOR_ID = 0x8924555549111444;
static const FluxPattern SECTOR_PATTERN(64, SECTOR_ID);
static const uint64_t DATA_ID = 0x8924555514444911;
static const FluxPattern DATA_PATTERN(64, DATA_ID);
static const FluxMatchers ALL_PATTERNS = {
&SECTOR_PATTERN,
&DATA_PATTERN
};
static const FluxMatchers ALL_PATTERNS = {&SECTOR_PATTERN, &DATA_PATTERN};
class AgatDecoder : public Decoder
{
public:
AgatDecoder(const DecoderProto& config):
Decoder(config)
{}
AgatDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(ALL_PATTERNS);
}
{
return seekToPattern(ALL_PATTERNS);
}
void decodeSectorRecord() override
{
if (readRaw64() != SECTOR_ID)
return;
{
if (readRaw64() != SECTOR_ID)
return;
auto bytes = decodeFmMfm(readRawBits(64)).slice(0, 4);
if (bytes[3] != 0x5a)
return;
auto bytes = decodeFmMfm(readRawBits(64)).slice(0, 4);
if (bytes[3] != 0x5a)
return;
_sector->logicalTrack = bytes[1] >> 1;
_sector->logicalSector = bytes[2];
_sector->logicalSide = bytes[1] & 1;
_sector->status = Sector::DATA_MISSING; /* unintuitive but correct */
}
_sector->logicalTrack = bytes[1] >> 1;
_sector->logicalSector = bytes[2];
_sector->logicalSide = bytes[1] & 1;
_sector->status = Sector::DATA_MISSING; /* unintuitive but correct */
}
void decodeDataRecord() override
{
if (readRaw64() != DATA_ID)
return;
void decodeDataRecord() override
{
if (readRaw64() != DATA_ID)
return;
Bytes bytes = decodeFmMfm(readRawBits((AGAT_SECTOR_SIZE+2)*16)).slice(0, AGAT_SECTOR_SIZE+2);
Bytes bytes = decodeFmMfm(readRawBits((AGAT_SECTOR_SIZE + 2) * 16))
.slice(0, AGAT_SECTOR_SIZE + 2);
if (bytes[AGAT_SECTOR_SIZE+1] != 0x5a)
return;
if (bytes[AGAT_SECTOR_SIZE + 1] != 0x5a)
return;
_sector->data = bytes.slice(0, AGAT_SECTOR_SIZE);
uint8_t wantChecksum = bytes[AGAT_SECTOR_SIZE];
uint8_t gotChecksum = agatChecksum(_sector->data);
_sector->status = (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = bytes.slice(0, AGAT_SECTOR_SIZE);
uint8_t wantChecksum = bytes[AGAT_SECTOR_SIZE];
uint8_t gotChecksum = agatChecksum(_sector->data);
_sector->status =
(wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createAgatDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new AgatDecoder(config));
return std::unique_ptr<Decoder>(new AgatDecoder(config));
}

arch/agat/encoder.cc (new file, 118 lines)

@@ -0,0 +1,118 @@
#include "lib/globals.h"
#include "lib/decoders/decoders.h"
#include "lib/encoders/encoders.h"
#include "agat.h"
#include "lib/crc.h"
#include "lib/readerwriter.h"
#include "lib/image.h"
#include "lib/layout.h"
#include "arch/agat/agat.pb.h"
#include "lib/encoders/encoders.pb.h"
class AgatEncoder : public Encoder
{
public:
AgatEncoder(const EncoderProto& config):
Encoder(config),
_config(config.agat())
{
}
private:
void writeRawBits(uint64_t data, int width)
{
_cursor += width;
_lastBit = data & 1;
for (int i = 0; i < width; i++)
{
unsigned pos = _cursor - i - 1;
if (pos < _bits.size())
_bits[pos] = data & 1;
data >>= 1;
}
}
void writeBytes(const Bytes& bytes)
{
encodeMfm(_bits, _cursor, bytes, _lastBit);
}
void writeByte(uint8_t byte)
{
Bytes b;
b.writer().write_8(byte);
writeBytes(b);
}
void writeFillerRawBytes(int count, uint16_t byte)
{
for (int i = 0; i < count; i++)
writeRawBits(byte, 16);
};
void writeFillerBytes(int count, uint8_t byte)
{
Bytes b{byte};
for (int i = 0; i < count; i++)
writeBytes(b);
};
public:
std::unique_ptr<Fluxmap> encode(std::shared_ptr<const TrackInfo>& trackInfo,
const std::vector<std::shared_ptr<const Sector>>& sectors,
const Image& image) override
{
auto trackLayout = Layout::getLayoutOfTrack(
trackInfo->logicalTrack, trackInfo->logicalSide);
double clockRateUs = _config.target_clock_period_us() / 2.0;
int bitsPerRevolution =
(_config.target_rotational_period_ms() * 1000.0) / clockRateUs;
_bits.resize(bitsPerRevolution);
_cursor = 0;
writeFillerRawBytes(_config.post_index_gap_bytes(), 0xaaaa);
for (const auto& sector : sectors)
{
/* Header */
writeFillerRawBytes(_config.pre_sector_gap_bytes(), 0xaaaa);
writeRawBits(SECTOR_ID, 64);
writeByte(0x5a);
writeByte((sector->logicalTrack << 1) | sector->logicalSide);
writeByte(sector->logicalSector);
writeByte(0x5a);
/* Data */
writeFillerRawBytes(_config.pre_data_gap_bytes(), 0xaaaa);
auto data = sector->data.slice(0, AGAT_SECTOR_SIZE);
writeRawBits(DATA_ID, 64);
writeBytes(data);
writeByte(agatChecksum(data));
writeByte(0x5a);
}
if (_cursor >= _bits.size())
error("track data overrun");
fillBitmapTo(_bits, _cursor, _bits.size(), {true, false});
auto fluxmap = std::make_unique<Fluxmap>();
fluxmap->appendBits(_bits,
calculatePhysicalClockPeriod(_config.target_clock_period_us() * 1e3,
_config.target_rotational_period_ms() * 1e6));
return fluxmap;
}
private:
const AgatEncoderProto& _config;
uint32_t _cursor;
bool _lastBit;
std::vector<bool> _bits;
};
std::unique_ptr<Encoder> createAgatEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new AgatEncoder(config));
}
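
The size of the track bitmap in encode() falls straight out of the two timing fields: the rotational period divided by half the configured clock period. A standalone sketch of that arithmetic, using hypothetical illustration values rather than the Agat profile's actual defaults:

// Standalone sketch of the bit-budget arithmetic in encode() above. The
// 4 us clock and 200 ms rotation are hypothetical illustration values, not
// necessarily the defaults from the Agat config.
#include <cstdio>

int main()
{
    double target_clock_period_us = 4.0;
    double target_rotational_period_ms = 200.0;

    double clockRateUs = target_clock_period_us / 2.0;
    int bitsPerRevolution =
        (target_rotational_period_ms * 1000.0) / clockRateUs;

    // 200000 us / 2 us = 100000 raw bit cells, i.e. room for 6250
    // MFM-encoded data bytes (16 cells per byte) per revolution.
    printf("%d bits per revolution (%d encoded bytes)\n",
        bitsPerRevolution,
        bitsPerRevolution / 16);
    return 0;
}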

View File

@@ -18,61 +18,61 @@ uint32_t amigaChecksum(const Bytes& bytes)
static uint8_t everyother(uint16_t x)
{
    /* aabb ccdd eeff gghh */
    x &= 0x6666;  /* 0ab0 0cd0 0ef0 0gh0 */
    x >>= 1;      /* 00ab 00cd 00ef 00gh */
    x |= x << 2;  /* abab cdcd efef ghgh */
    x &= 0x3c3c;  /* 00ab cd00 00ef gh00 */
    x >>= 2;      /* 0000 abcd 0000 efgh */
    x |= x >> 4;  /* 0000 abcd abcd efgh */
    return x;
}
Bytes amigaInterleave(const Bytes& input)
{
    Bytes output;
    ByteWriter bw(output);

    /* Write all odd bits. (Numbering starts at 0...) */
    {
        ByteReader br(input);
        while (!br.eof())
        {
            uint16_t x = br.read_be16();
            x &= 0xaaaa;       /* a0b0 c0d0 e0f0 g0h0 */
            x |= x >> 1;       /* aabb ccdd eeff gghh */
            x = everyother(x); /* 0000 0000 abcd efgh */
            bw.write_8(x);
        }
    }

    /* Write all even bits. */
    {
        ByteReader br(input);
        while (!br.eof())
        {
            uint16_t x = br.read_be16();
            x &= 0x5555;       /* 0a0b 0c0d 0e0f 0g0h */
            x |= x << 1;       /* aabb ccdd eeff gghh */
            x = everyother(x); /* 0000 0000 abcd efgh */
            bw.write_8(x);
        }
    }

    return output;
}
Bytes amigaDeinterleave(const uint8_t*& input, size_t len)
{
assert(!(len & 1));
const uint8_t* odds = &input[0];
    const uint8_t* evens = &input[len / 2];
    Bytes output;
    ByteWriter bw(output);

    for (size_t i = 0; i < len / 2; i++)
{
uint8_t o = *odds++;
uint8_t e = *evens++;
@@ -81,11 +81,15 @@ Bytes amigaDeinterleave(const uint8_t*& input, size_t len)
* http://graphics.stanford.edu/~seander/bithacks.html#InterleaveBMN
*/
uint16_t result =
            (((e * 0x0101010101010101ULL & 0x8040201008040201ULL) *
                 0x0102040810204081ULL >> 49) & 0x5555) |
            (((o * 0x0101010101010101ULL & 0x8040201008040201ULL) *
                 0x0102040810204081ULL >> 48) & 0xAAAA);
bw.write_be16(result);
}
@@ -95,6 +99,6 @@ Bytes amigaDeinterleave(const uint8_t*& input, size_t len)
Bytes amigaDeinterleave(const Bytes& input)
{
    const uint8_t* ptr = input.cbegin();
    return amigaDeinterleave(ptr, input.size());
}
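
amigaInterleave() splits each big-endian 16-bit word into its odd-numbered bits and its even-numbered bits (bit 0 being the least significant, as the comment's numbering implies), writing all the odd-bit bytes first and all the even-bit bytes second; amigaDeinterleave() recombines them with the multiply-and-mask trick from the Stanford bithacks page. A minimal self-contained illustration of the same split and recombination on plain integers; it is a conceptual sketch, not byte-for-byte identical to everyother()'s packing:

// Standalone illustration of the Amiga odd/even bit split used by
// amigaInterleave()/amigaDeinterleave(), using plain stdint types instead
// of the repository's Bytes classes.
#include <cstdint>
#include <cstdio>

// Collect bits 15,13,...,1 (the odd-numbered bits) into one byte.
static uint8_t oddBits(uint16_t x)
{
    uint8_t out = 0;
    for (int i = 0; i < 8; i++)
        out = (out << 1) | ((x >> (15 - 2 * i)) & 1);
    return out;
}

// Collect bits 14,12,...,0 (the even-numbered bits) into one byte.
static uint8_t evenBits(uint16_t x)
{
    uint8_t out = 0;
    for (int i = 0; i < 8; i++)
        out = (out << 1) | ((x >> (14 - 2 * i)) & 1);
    return out;
}

// Re-interleave the two halves back into a 16-bit word.
static uint16_t interleave(uint8_t o, uint8_t e)
{
    uint16_t out = 0;
    for (int i = 7; i >= 0; i--)
        out = (out << 2) | (((o >> i) & 1) << 1) | ((e >> i) & 1);
    return out;
}

int main()
{
    uint16_t word = 0xb6c9;
    uint8_t o = oddBits(word);
    uint8_t e = evenBits(word);
    // Prints: word=b6c9 odd=da even=69 roundtrip=b6c9
    printf("word=%04x odd=%02x even=%02x roundtrip=%04x\n",
        word, o, e, interleave(o, e));
    return 0;
}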

View File

@@ -11,70 +11,74 @@
#include <string.h>
#include <algorithm>
/*
 * Amiga disks use MFM but it's not quite the same as IBM MFM. They only use
 * a single type of record with a different marker byte.
 *
* See the big comment in the IBM MFM decoder for the gruesome details of how
* MFM works.
*/
static const FluxPattern SECTOR_PATTERN(48, AMIGA_SECTOR_RECORD);
class AmigaDecoder : public Decoder
{
public:
    AmigaDecoder(const DecoderProto& config):
        Decoder(config),
        _config(config.amiga())
    {
    }

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(SECTOR_PATTERN);
    }
    void decodeSectorRecord() override
    {
        if (readRaw48() != AMIGA_SECTOR_RECORD)
            return;

        const auto& rawbits = readRawBits(AMIGA_RECORD_SIZE * 16);
        if (rawbits.size() < (AMIGA_RECORD_SIZE * 16))
            return;
        const auto& rawbytes = toBytes(rawbits).slice(0, AMIGA_RECORD_SIZE * 2);
        const auto& bytes = decodeFmMfm(rawbits).slice(0, AMIGA_RECORD_SIZE);

        const uint8_t* ptr = bytes.begin();

        Bytes header = amigaDeinterleave(ptr, 4);
        Bytes recoveryinfo = amigaDeinterleave(ptr, 16);

        _sector->logicalTrack = header[1] >> 1;
        _sector->logicalSide = header[1] & 1;
        _sector->logicalSector = header[2];

        uint32_t wantedheaderchecksum =
            amigaDeinterleave(ptr, 4).reader().read_be32();
        uint32_t gotheaderchecksum = amigaChecksum(rawbytes.slice(0, 40));
        if (gotheaderchecksum != wantedheaderchecksum)
            return;

        uint32_t wanteddatachecksum =
            amigaDeinterleave(ptr, 4).reader().read_be32();
        uint32_t gotdatachecksum = amigaChecksum(rawbytes.slice(56, 1024));

        Bytes data;
        data.writer().append(amigaDeinterleave(ptr, 512)).append(recoveryinfo);
        _sector->data = data;
        _sector->status = (gotdatachecksum == wanteddatachecksum)
                              ? Sector::OK
                              : Sector::BAD_CHECKSUM;
    }

private:
    const AmigaDecoderProto& _config;
    nanoseconds_t _clock;
};
std::unique_ptr<Decoder> createAmigaDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new AmigaDecoder(config));
}

View File

@@ -59,7 +59,7 @@ static void write_sector(std::vector<bool>& bits,
const std::shared_ptr<const Sector>& sector)
{
if ((sector->data.size() != 512) && (sector->data.size() != 528))
Error() << "unsupported sector size --- you must pick 512 or 528";
error("unsupported sector size --- you must pick 512 or 528");
uint32_t checksum = 0;
@@ -114,7 +114,8 @@ public:
const std::vector<std::shared_ptr<const Sector>>& sectors,
const Image& image) override
{
        /* Number of bits for one nominal revolution of a real 200ms Amiga disk. */
int bitsPerRevolution = 200e3 / _config.clock_rate_us();
std::vector<bool> bits(bitsPerRevolution);
unsigned cursor = 0;
@@ -129,13 +130,12 @@ public:
write_sector(bits, cursor, sector);
if (cursor >= bits.size())
Error() << "track data overrun";
error("track data overrun");
fillBitmapTo(bits, cursor, bits.size(), {true, false});
auto fluxmap = std::make_unique<Fluxmap>();
fluxmap->appendBits(bits,
            calculatePhysicalClockPeriod(_config.clock_rate_us() * 1e3, 200e6));
return fluxmap;
}

View File

@@ -5,16 +5,15 @@
#include "decoders/decoders.h"
#include "encoders/encoders.h"
#define APPLE2_SECTOR_RECORD 0xd5aa96
#define APPLE2_DATA_RECORD 0xd5aaad

#define APPLE2_SECTOR_LENGTH 256
#define APPLE2_ENCODED_SECTOR_LENGTH 342
#define APPLE2_SECTORS 16
extern std::unique_ptr<Decoder> createApple2Decoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createApple2Encoder(const EncoderProto& config);
#endif

View File

@@ -2,7 +2,10 @@ syntax = "proto2";
import "lib/common.proto";
message Apple2DecoderProto {}
message Apple2DecoderProto {
optional uint32 side_one_track_offset = 1
[ default = 0, (help) = "offset to apply to track numbers on side 1" ];
}
message Apple2EncoderProto
{
@@ -13,4 +16,7 @@ message Apple2EncoderProto
/* Apple II disk drives spin at 300rpm. */
optional double rotational_period_ms = 2
[ default = 200.0, (help) = "rotational period on the real device" ];
optional uint32 side_one_track_offset = 3
[ default = 0, (help) = "offset to apply to track numbers on side 1" ];
}

View File

@@ -5,6 +5,8 @@
#include "decoders/decoders.h"
#include "sector.h"
#include "apple2.h"
#include "arch/apple2/apple2.pb.h"
#include "lib/decoders/decoders.pb.h"
#include "bytes.h"
#include "fmt/format.h"
#include <string.h>
@@ -12,22 +14,25 @@
const FluxPattern SECTOR_RECORD_PATTERN(24, APPLE2_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(24, APPLE2_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
    case gcr: \
        return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
/* This is extremely inspired by the MESS implementation, written by Nathan
 * Woods and R. Belmont:
 * https://github.com/mamedev/mame/blob/7914a6083a3b3a8c243ae6c3b8cb50b023f21e0e/src/lib/formats/ap2_dsk.cpp
*/
static Bytes decode_crazy_data(const uint8_t* inp, Sector::Status& status)
{
@@ -47,9 +52,11 @@ static Bytes decode_crazy_data(const uint8_t* inp, Sector::Status& status)
{
/* 3 * 2 bit */
output[i + 0] = ((checksum >> 1) & 0x01) | ((checksum << 1) & 0x02);
            output[i + 86] =
                ((checksum >> 3) & 0x01) | ((checksum >> 1) & 0x02);
            if ((i + 172) < APPLE2_SECTOR_LENGTH)
                output[i + 172] =
                    ((checksum >> 5) & 0x01) | ((checksum >> 3) & 0x02);
}
}
@@ -67,88 +74,102 @@ static uint8_t combine(uint16_t word)
class Apple2Decoder : public Decoder
{
public:
    Apple2Decoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }
    void decodeSectorRecord() override
    {
        if (readRaw24() != APPLE2_SECTOR_RECORD)
            return;

        /* Read header. */

        auto header = toBytes(readRawBits(8 * 8)).slice(0, 8);
        ByteReader br(header);

        uint8_t volume = combine(br.read_be16());
        _sector->logicalTrack = combine(br.read_be16());
        _sector->logicalSide = _sector->physicalSide;
        _sector->logicalSector = combine(br.read_be16());
        uint8_t checksum = combine(br.read_be16());

        // If the checksum is correct, upgrade the sector from MISSING
        // to DATA_MISSING in anticipation of its data record
        if (checksum ==
            (volume ^ _sector->logicalTrack ^ _sector->logicalSector))
            _sector->status =
                Sector::DATA_MISSING; /* unintuitive but correct */

        if (_sector->logicalSide == 1)
            _sector->logicalTrack -= _config.apple2().side_one_track_offset();

        /* Sanity check. */
        if (_sector->logicalTrack > 100)
        {
            _sector->status = Sector::MISSING;
            return;
        }
    }
    void decodeDataRecord() override
    {
        /* Check ID. */

        if (readRaw24() != APPLE2_DATA_RECORD)
            return;

        // Sometimes there's a 1-bit gap between APPLE2_DATA_RECORD and
        // the data itself. This has been seen on real world disks
        // such as the Apple II Operating System Kit from Apple2Online.
        // However, I haven't seen it described in any of the various
        // references.
        //
        // This extra '0' bit would not affect the real disk interface,
        // as it was a '1' reaching the top bit of a shift register
        // that triggered a byte to be available, but it affects the
        // way the data is read here.
        //
        // While the floppies tested only seemed to need this applied
        // to the first byte of the data record, applying it
        // consistently to all of them doesn't seem to hurt, and
        // simplifies the code.

        /* Read and decode data. */

        auto readApple8 = [&]()
        {
            auto result = 0;
            while ((result & 0x80) == 0)
            {
                auto b = readRawBits(1);
                if (b.empty())
                    break;
                result = (result << 1) | b[0];
            }
            return result;
        };

        constexpr unsigned recordLength = APPLE2_ENCODED_SECTOR_LENGTH + 2;
        uint8_t bytes[recordLength];
        for (auto& byte : bytes)
        {
            byte = readApple8();
        }

        // Upgrade the sector from MISSING to BAD_CHECKSUM.
        // If decode_crazy_data succeeds, it upgrades the sector to
        // OK.
        _sector->status = Sector::BAD_CHECKSUM;
        _sector->data = decode_crazy_data(&bytes[0], _sector->status);
    }
};
std::unique_ptr<Decoder> createApple2Decoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new Apple2Decoder(config));
}
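
The header fields (volume, track, sector, checksum) use Apple's "4-and-4" encoding: each byte is spread across two disk bytes, one carrying the odd-numbered bits and the other the even-numbered bits, with the unused positions forced to 1 so the result is always a legal disk nibble. Reading both bytes back as a big-endian word and recombining them is presumably what the combine() helper above does. A minimal standalone sketch of that scheme; encode44/decode44 are illustrative names, not functions from this repository:

// Standalone sketch of Apple II "4-and-4" header encoding.
#include <cstdint>
#include <cstdio>

// First byte carries bits 7,5,3,1; second carries bits 6,4,2,0;
// the gaps are padded with 1s.
static void encode44(uint8_t value, uint8_t out[2])
{
    out[0] = (value >> 1) | 0xaa;
    out[1] = value | 0xaa;
}

// Recombine: shift the odd-bits byte left, set its low bit, and AND
// with the even-bits byte.
static uint8_t decode44(uint8_t hi, uint8_t lo)
{
    return ((hi << 1) | 1) & lo;
}

int main()
{
    uint8_t track = 0x23;
    uint8_t pair[2];
    encode44(track, pair);
    // Prints: 0x23 -> bb ab -> 0x23
    printf("0x%02x -> %02x %02x -> 0x%02x\n",
        track, pair[0], pair[1], decode44(pair[0], pair[1]));
    return 0;
}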

View File

@@ -50,14 +50,12 @@ public:
writeSector(bits, cursor, *sector);
if (cursor >= bits.size())
Error() << fmt::format(
"track data overrun by {} bits", cursor - bits.size());
error("track data overrun by {} bits", cursor - bits.size());
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
fluxmap->appendBits(bits,
calculatePhysicalClockPeriod(
_config.clock_period_us() * 1e3,
calculatePhysicalClockPeriod(_config.clock_period_us() * 1e3,
_config.rotational_period_ms() * 1e6));
return fluxmap;
}
@@ -119,8 +117,7 @@ private:
// There is data to encode to disk.
if ((sector.data.size() != APPLE2_SECTOR_LENGTH))
Error() << fmt::format(
"unsupported sector size {} --- you must pick 256",
error("unsupported sector size {} --- you must pick 256",
sector.data.size());
// Write address syncing leader : A sequence of "FF40"s; 5 of them
@@ -132,13 +129,17 @@ private:
// extra padding.
write_ff40(sector.logicalSector == 0 ? 32 : 8);
int track = sector.logicalTrack;
if (sector.logicalSide == 1)
track += _config.side_one_track_offset();
// Write address field: APPLE2_SECTOR_RECORD + sector identifier +
// DE AA EB
write_bits(APPLE2_SECTOR_RECORD, 24);
write_gcr44(volume_id);
write_gcr44(sector.logicalTrack);
write_gcr44(track);
write_gcr44(sector.logicalSector);
write_gcr44(volume_id ^ sector.logicalTrack ^ sector.logicalSector);
write_gcr44(volume_id ^ track ^ sector.logicalSector);
write_bits(0xDEAAEB, 24);
// Write data syncing leader: FF40 + APPLE2_DATA_RECORD + sector

View File

@@ -3,17 +3,19 @@
/* Brother word processor format (or at least, one of them) */
#define BROTHER_SECTOR_RECORD 0xFFFFFD57
#define BROTHER_DATA_RECORD 0xFFFFFDDB
#define BROTHER_DATA_RECORD_PAYLOAD 256
#define BROTHER_DATA_RECORD_CHECKSUM 3
#define BROTHER_DATA_RECORD_ENCODED_SIZE 415

#define BROTHER_TRACKS_PER_240KB_DISK 78
#define BROTHER_TRACKS_PER_120KB_DISK 39
#define BROTHER_SECTORS_PER_TRACK 12

extern std::unique_ptr<Decoder> createBrotherDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createBrotherEncoder(
    const EncoderProto& config);
#endif

View File

@@ -1,13 +1,13 @@
GCR_ENTRY(0x55, 0)  // 00000
GCR_ENTRY(0x57, 1)  // 00001
GCR_ENTRY(0x5b, 2)  // 00010
GCR_ENTRY(0x5d, 3)  // 00011
GCR_ENTRY(0x5f, 4)  // 00100
GCR_ENTRY(0x6b, 5)  // 00101
GCR_ENTRY(0x6d, 6)  // 00110
GCR_ENTRY(0x6f, 7)  // 00111
GCR_ENTRY(0x75, 8)  // 01000
GCR_ENTRY(0x77, 9)  // 01001
GCR_ENTRY(0x7b, 10) // 01010
GCR_ENTRY(0x7d, 11) // 01011
GCR_ENTRY(0x7f, 12) // 01100
@@ -30,4 +30,3 @@ GCR_ENTRY(0xef, 28) // 11100
GCR_ENTRY(0xf5, 29) // 11101
GCR_ENTRY(0xf7, 30) // 11110
GCR_ENTRY(0xfb, 31) // 11111

View File

@@ -11,7 +11,8 @@
const FluxPattern SECTOR_RECORD_PATTERN(32, BROTHER_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(32, BROTHER_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static std::vector<uint8_t> outputbuffer;
@@ -32,88 +33,89 @@ static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
    case gcr: \
        return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
static int decode_header_gcr(uint16_t word)
{
    switch (word)
    {
#define GCR_ENTRY(gcr, data) \
    case gcr: \
        return data;
#include "header_gcr.h"
#undef GCR_ENTRY
    }
    return -1;
}
class BrotherDecoder : public Decoder
{
public:
    BrotherDecoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }
    void decodeSectorRecord() override
    {
        if (readRaw32() != BROTHER_SECTOR_RECORD)
            return;

        const auto& rawbits = readRawBits(32);
        const auto& bytes = toBytes(rawbits).slice(0, 4);

        ByteReader br(bytes);
        _sector->logicalTrack = decode_header_gcr(br.read_be16());
        _sector->logicalSector = decode_header_gcr(br.read_be16());

        /* Sanity check the values read; there's no header checksum and
         * occasionally we get garbage due to bit errors. */
        if (_sector->logicalSector > 11)
            return;
        if (_sector->logicalTrack > 79)
            return;

        _sector->status = Sector::DATA_MISSING;
    }
    void decodeDataRecord() override
    {
        if (readRaw32() != BROTHER_DATA_RECORD)
            return;

        const auto& rawbits = readRawBits(BROTHER_DATA_RECORD_ENCODED_SIZE * 8);
        const auto& rawbytes =
            toBytes(rawbits).slice(0, BROTHER_DATA_RECORD_ENCODED_SIZE);

        Bytes bytes;
        ByteWriter bw(bytes);
        BitWriter bitw(bw);
        for (uint8_t b : rawbytes)
        {
            uint32_t nibble = decode_data_gcr(b);
            bitw.push(nibble, 5);
        }
        bitw.flush();

        _sector->data = bytes.slice(0, BROTHER_DATA_RECORD_PAYLOAD);
        uint32_t realCrc = crcbrother(_sector->data);
        uint32_t wantCrc =
            bytes.reader().seek(BROTHER_DATA_RECORD_PAYLOAD).read_be24();
        _sector->status =
            (realCrc == wantCrc) ? Sector::OK : Sector::BAD_CHECKSUM;
    }
};
std::unique_ptr<Decoder> createBrotherDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new BrotherDecoder(config));
}
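
Each GCR byte read from a data record decodes to a 5-bit group (via decode_data_gcr and the BitWriter), so the 415 encoded bytes yield 415 × 5 = 2075 bits, just over the 259 bytes needed for the 256-byte payload plus the 3-byte checksum. A quick standalone check of that arithmetic using the constants from brother.h above:

// Sanity check: BROTHER_DATA_RECORD_ENCODED_SIZE GCR bytes at 5 data bits
// each must cover the payload plus checksum.
#include <cstdio>

int main()
{
    const int encodedBytes = 415; // BROTHER_DATA_RECORD_ENCODED_SIZE
    const int payload = 256;      // BROTHER_DATA_RECORD_PAYLOAD
    const int checksum = 3;       // BROTHER_DATA_RECORD_CHECKSUM

    int dataBits = encodedBytes * 5; // 2075 bits
    int dataBytes = dataBits / 8;    // 259 whole bytes
    printf("%d bits -> %d bytes available, %d needed\n",
        dataBits, dataBytes, payload + checksum);
    return 0;
}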

View File

@@ -67,7 +67,7 @@ static void write_sector_data(
int width = 0;
if (data.size() != BROTHER_DATA_RECORD_PAYLOAD)
Error() << "unsupported sector size";
error("unsupported sector size");
auto write_byte = [&](uint8_t byte)
{
@@ -107,8 +107,7 @@ public:
}
public:
std::unique_ptr<Fluxmap> encode(
std::shared_ptr<const TrackInfo>& trackInfo,
std::unique_ptr<Fluxmap> encode(std::shared_ptr<const TrackInfo>& trackInfo,
const std::vector<std::shared_ptr<const Sector>>& sectors,
const Image& image) override
{
@@ -116,8 +115,8 @@ public:
std::vector<bool> bits(bitsPerRevolution);
unsigned cursor = 0;
        int sectorCount = 0;
        for (const auto& sectorData : sectors)
{
double headerMs = _config.post_index_gap_ms() +
sectorCount * _config.sector_spacing_ms();
@@ -126,16 +125,18 @@ public:
unsigned dataCursor = dataMs * 1e3 / _config.clock_rate_us();
fillBitmapTo(bits, cursor, headerCursor, {true, false});
            write_sector_header(bits,
                cursor,
                sectorData->logicalTrack,
                sectorData->logicalSector);
fillBitmapTo(bits, cursor, dataCursor, {true, false});
write_sector_data(bits, cursor, sectorData->data);
            sectorCount++;
}
if (cursor >= bits.size())
Error() << "track data overrun";
error("track data overrun");
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -147,8 +148,7 @@ private:
const BrotherEncoderProto& _config;
};
std::unique_ptr<Encoder> createBrotherEncoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createBrotherEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new BrotherEncoder(config));
}

View File

@@ -76,4 +76,3 @@ GCR_ENTRY(0x6BAB, 74)
GCR_ENTRY(0xAD5F, 75)
GCR_ENTRY(0xDBED, 76)
GCR_ENTRY(0x55BB, 77)

View File

@@ -2,6 +2,7 @@ LIBARCH_SRCS = \
arch/aeslanier/decoder.cc \
arch/agat/agat.cc \
arch/agat/decoder.cc \
arch/agat/encoder.cc \
arch/amiga/amiga.cc \
arch/amiga/decoder.cc \
arch/amiga/encoder.cc \
@@ -23,6 +24,7 @@ LIBARCH_SRCS = \
arch/mx/decoder.cc \
arch/northstar/decoder.cc \
arch/northstar/encoder.cc \
arch/rolandd20/decoder.cc \
arch/smaky6/decoder.cc \
arch/tids990/decoder.cc \
arch/tids990/encoder.cc \
@@ -35,8 +37,7 @@ OBJS += $(LIBARCH_OBJS)
$(LIBARCH_SRCS): | $(PROTO_HDRS)
$(LIBARCH_SRCS): CFLAGS += $(PROTO_CFLAGS)
LIBARCH_LIB = $(OBJDIR)/libarch.a
LIBARCH_LDFLAGS =
$(LIBARCH_LIB): $(LIBARCH_OBJS)
LIBARCH_LDFLAGS = $(LIBARCH_LIB)
$(call use-pkgconfig, $(LIBARCH_LIB), $(LIBARCH_OBJS), fmt)

View File

@@ -2,27 +2,27 @@
#include "c64.h"
/*
 * Track      Sectors/track   # Sectors   Storage in Bytes   Clock rate
 * -----      -------------   ---------   ----------------   ----------
 *  1-17            21           357           7820             3.25
 * 18-24            19           133           7170             3.5
 * 25-30            18           108           6300             3.75
 * 31-40(*)         17            85           6020             4
 *                               ---
 *                               683 (for a 35 track image)
 *
 * The clock rate is normalised for a 200ms drive.
 */
nanoseconds_t clockPeriodForC64Track(unsigned track)
{
    constexpr double b = 8.0;
    if (track < 17)
        return 26.0 / b;
    if (track < 24)
        return 28.0 / b;
    if (track < 30)
        return 30.0 / b;
    return 32.0 / b;
}
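
clockPeriodForC64Track() encodes the zone table from the comment above: four speed zones, with the per-bit clock getting slower towards the inner tracks. A small standalone usage sketch; the function is duplicated here only so the example compiles on its own, and track numbers are 0-based, matching the comparisons in the real function:

// Print the per-zone value for one representative (0-based) track per zone.
#include <cstdio>

static double clockPeriodForC64Track(unsigned track)
{
    constexpr double b = 8.0;
    if (track < 17)
        return 26.0 / b;
    if (track < 24)
        return 28.0 / b;
    if (track < 30)
        return 30.0 / b;
    return 32.0 / b;
}

int main()
{
    // Prints 3.250, 3.500, 3.750 and 4.000 for the four zones.
    for (unsigned track : {0u, 17u, 24u, 30u})
        printf("track %2u -> %.3f\n", track, clockPeriodForC64Track(track));
    return 0;
}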

View File

@@ -4,11 +4,11 @@
#include "decoders/decoders.h"
#include "encoders/encoders.h"
#define C64_SECTOR_RECORD 0xffd49
#define C64_DATA_RECORD 0xffd57
#define C64_SECTOR_LENGTH 256

/* Source: http://www.unusedino.de/ec64/technical/formats/g64.html
1. Header sync FF FF FF FF FF (40 'on' bits, not GCR)
2. Header info 52 54 B5 29 4B 7A 5E 95 55 55 (10 GCR bytes)
3. Header gap 55 55 55 55 55 55 55 55 55 (9 bytes, never read)
@@ -17,18 +17,20 @@
6. Inter-sector gap 55 55 55 55...55 55 (4 to 12 bytes, never read)
1. Header sync (SYNC for the next sector)
*/
#define C64_HEADER_DATA_SYNC 0xFF
#define C64_HEADER_BLOCK_ID 0x08
#define C64_DATA_BLOCK_ID 0x07
#define C64_HEADER_GAP 0x55
#define C64_INTER_SECTOR_GAP 0x55
#define C64_PADDING 0x0F

#define C64_TRACKS_PER_DISK 40
#define C64_BAM_TRACK 17
extern std::unique_ptr<Decoder> createCommodore64Decoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createCommodore64Encoder(
    const EncoderProto& config);
extern nanoseconds_t clockPeriodForC64Track(unsigned track);

View File

@@ -96,8 +96,7 @@ public:
}
};
std::unique_ptr<Decoder> createCommodore64Decoder(
const DecoderProto& config)
std::unique_ptr<Decoder> createCommodore64Decoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new Commodore64Decoder(config));
}

View File

@@ -51,26 +51,6 @@ static void write_bits(
}
}
void bindump(std::ostream& stream, std::vector<bool>& buffer)
{
size_t pos = 0;
while ((pos < buffer.size()) and (pos < 520))
{
stream << fmt::format("{:5d} : ", pos);
for (int i = 0; i < 40; i++)
{
if ((pos + i) < buffer.size())
stream << fmt::format("{:01b}", (buffer[pos + i]));
else
stream << "-- ";
if ((((pos + i + 1) % 8) == 0) and i != 0)
stream << " ";
}
stream << std::endl;
pos += 40;
}
}
static std::vector<bool> encode_data(uint8_t input)
{
/*
@@ -214,8 +194,7 @@ public:
writeSector(bits, cursor, sector);
if (cursor >= bits.size())
Error() << fmt::format(
"track data overrun by {} bits", cursor - bits.size());
error("track data overrun by {} bits", cursor - bits.size());
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -243,8 +222,7 @@ private:
{
// There is data to encode to disk.
if ((sector->data.size() != C64_SECTOR_LENGTH))
Error() << fmt::format(
"unsupported sector size {} --- you must pick 256",
error("unsupported sector size {} --- you must pick 256",
sector->data.size());
// 1. Write header Sync (not GCR)

View File

@@ -13,16 +13,18 @@
const FluxPattern SECTOR_RECORD_PATTERN(24, F85_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(24, F85_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
    case gcr: \
        return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
@@ -37,11 +39,11 @@ static Bytes decode(const std::vector<bool>& bits)
while (ii != bits.end())
{
uint8_t inputfifo = 0;
        for (size_t i = 0; i < 5; i++)
        {
            if (ii == bits.end())
                break;
            inputfifo = (inputfifo << 1) | *ii++;
}
bitw.push(decode_data_gcr(inputfifo), 4);
@@ -54,56 +56,55 @@ static Bytes decode(const std::vector<bool>& bits)
class DurangoF85Decoder : public Decoder
{
public:
    DurangoF85Decoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }
    void decodeSectorRecord() override
    {
        /* Skip sync bits and ID byte. */

        if (readRaw24() != F85_SECTOR_RECORD)
            return;

        /* Read header. */

        const auto& bytes = decode(readRawBits(6 * 10));

        _sector->logicalSector = bytes[2];
        _sector->logicalSide = 0;
        _sector->logicalTrack = bytes[0];

        uint16_t wantChecksum = bytes.reader().seek(4).read_be16();
        uint16_t gotChecksum = crc16(CCITT_POLY, 0xef21, bytes.slice(0, 4));
        if (wantChecksum == gotChecksum)
            _sector->status =
                Sector::DATA_MISSING; /* unintuitive but correct */
    }
    void decodeDataRecord() override
    {
        /* Skip sync bits and ID byte. */

        if (readRaw24() != F85_DATA_RECORD)
            return;

        const auto& bytes = decode(readRawBits((F85_SECTOR_LENGTH + 3) * 10))
                                .slice(0, F85_SECTOR_LENGTH + 3);
        ByteReader br(bytes);

        _sector->data = br.read(F85_SECTOR_LENGTH);
        uint16_t wantChecksum = br.read_be16();
        uint16_t gotChecksum = crc16(CCITT_POLY, 0xbf84, _sector->data);
        _sector->status =
            (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }
};
std::unique_ptr<Decoder> createDurangoF85Decoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new DurangoF85Decoder(config));
}

View File

@@ -2,9 +2,10 @@
#define F85_H
#define F85_SECTOR_RECORD 0xffffce /* 1111 1111 1111 1111 1100 1110 */
#define F85_DATA_RECORD 0xffffcb /* 1111 1111 1111 1111 1100 1101 */
#define F85_SECTOR_LENGTH 512

extern std::unique_ptr<Decoder> createDurangoF85Decoder(
    const DecoderProto& config);
#endif

View File

@@ -14,10 +14,10 @@
const FluxPattern SECTOR_ID_PATTERN(16, 0xabaa);
/*
 * Reverse engineered from a dump of the floppy drive's ROM. I have no idea how
 * it works.
 *
* LF8BA:
* clra
* staa X00B0
@@ -100,45 +100,43 @@ static uint16_t checksum(const Bytes& bytes)
class Fb100Decoder : public Decoder
{
public:
    Fb100Decoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(SECTOR_ID_PATTERN);
    }
    void decodeSectorRecord() override
    {
        auto rawbits = readRawBits(FB100_RECORD_SIZE * 16);

        const Bytes bytes = decodeFmMfm(rawbits).slice(0, FB100_RECORD_SIZE);
        ByteReader br(bytes);
        br.seek(1);
        const Bytes id = br.read(FB100_ID_SIZE);
        uint16_t wantIdCrc = br.read_be16();
        uint16_t gotIdCrc = checksum(id);
        const Bytes payload = br.read(FB100_PAYLOAD_SIZE);
        uint16_t wantPayloadCrc = br.read_be16();
        uint16_t gotPayloadCrc = checksum(payload);

        if (wantIdCrc != gotIdCrc)
            return;

        uint8_t abssector = id[2];
        _sector->logicalTrack = abssector >> 1;
        _sector->logicalSide = 0;
        _sector->logicalSector = abssector & 1;
        _sector->data.writer().append(id.slice(5, 12)).append(payload);

        _sector->status = (wantPayloadCrc == gotPayloadCrc)
                              ? Sector::OK
                              : Sector::BAD_CHECKSUM;
    }
};
std::unique_ptr<Decoder> createFb100Decoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new Fb100Decoder(config));
}
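
The FB100 header carries a single absolute sector number (id[2]); the decoder above splits it into a track number (the high bits) and a two-way sector index (the low bit). A trivial standalone illustration with a made-up value:

// Illustration of the abssector split done in decodeSectorRecord() above.
#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t abssector = 45; // hypothetical example value
    printf("track=%d sector=%d\n", abssector >> 1, abssector & 1); // 22, 1
    return 0;
}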

View File

@@ -8,4 +8,3 @@
extern std::unique_ptr<Decoder> createFb100Decoder(const DecoderProto& config);
#endif

View File

@@ -147,6 +147,7 @@ public:
_sector->logicalSide = br.read_8();
_sector->logicalSector = br.read_8();
_currentSectorSize = 1 << (br.read_8() + 7);
uint16_t gotCrc = crc16(CCITT_POLY, bytes.slice(0, br.pos));
uint16_t wantCrc = br.read_be16();
if (wantCrc == gotCrc)
@@ -206,6 +207,18 @@ public:
uint16_t wantCrc = br.read_be16();
_sector->status =
(wantCrc == gotCrc) ? Sector::OK : Sector::BAD_CHECKSUM;
auto layout = Layout::getLayoutOfTrack(
_sector->logicalTrack, _sector->logicalSide);
if (_currentSectorSize != layout->sectorSize)
std::cerr << fmt::format(
"Warning: configured sector size for t{}.h{}.s{} is {} bytes "
"but that seen on disk is {} bytes\n",
_sector->logicalTrack,
_sector->logicalSide,
_sector->logicalSector,
layout->sectorSize,
_currentSectorSize);
}
private:

View File

@@ -112,10 +112,11 @@ public:
const Image& image) override
{
IbmEncoderProto::TrackdataProto trackdata;
        getEncoderTrackData(
            trackdata, trackInfo->logicalTrack, trackInfo->logicalSide);

        auto trackLayout = Layout::getLayoutOfTrack(
            trackInfo->logicalTrack, trackInfo->logicalSide);
auto writeBytes = [&](const Bytes& bytes)
{
@@ -257,7 +258,7 @@ public:
}
if (_cursor >= _bits.size())
Error() << "track data overrun";
error("track data overrun");
while (_cursor < _bits.size())
writeFillerRawBytes(1, gapFill);

View File

@@ -31,9 +31,7 @@ class Decoder;
class DecoderProto;
class EncoderProto;
extern std::unique_ptr<Decoder> createIbmDecoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createIbmEncoder(const EncoderProto& config);
#endif

View File

@@ -12,22 +12,25 @@
const FluxPattern SECTOR_RECORD_PATTERN(24, MAC_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(24, MAC_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
    case gcr: \
        return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
/* This is extremely inspired by the MESS implementation, written by Nathan
 * Woods and R. Belmont:
 * https://github.com/mamedev/mame/blob/4263a71e64377db11392c458b580c5ae83556bc7/src/lib/formats/ap_dsk35.cpp
*/
static Bytes decode_crazy_data(const Bytes& input, Sector::Status& status)
{
@@ -41,7 +44,7 @@ static Bytes decode_crazy_data(const Bytes& input, Sector::Status& status)
uint8_t b2[LOOKUP_LEN + 1];
uint8_t b3[LOOKUP_LEN + 1];
    for (int i = 0; i <= LOOKUP_LEN; i++)
{
uint8_t w4 = br.read_8();
uint8_t w1 = br.read_8();
@@ -125,67 +128,68 @@ uint8_t decode_side(uint8_t side)
class MacintoshDecoder : public Decoder
{
public:
    MacintoshDecoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }
    void decodeSectorRecord() override
    {
        if (readRaw24() != MAC_SECTOR_RECORD)
            return;

        /* Read header. */

        auto header = toBytes(readRawBits(7 * 8)).slice(0, 7);

        uint8_t encodedTrack = decode_data_gcr(header[0]);
        if (encodedTrack != (_sector->physicalTrack & 0x3f))
            return;

        uint8_t encodedSector = decode_data_gcr(header[1]);
        uint8_t encodedSide = decode_data_gcr(header[2]);
        uint8_t formatByte = decode_data_gcr(header[3]);
        uint8_t wantedsum = decode_data_gcr(header[4]);

        if (encodedSector > 11)
            return;

        _sector->logicalTrack = _sector->physicalTrack;
        _sector->logicalSide = decode_side(encodedSide);
        _sector->logicalSector = encodedSector;

        uint8_t gotsum =
            (encodedTrack ^ encodedSector ^ encodedSide ^ formatByte) & 0x3f;
        if (wantedsum == gotsum)
            _sector->status =
                Sector::DATA_MISSING; /* unintuitive but correct */
    }
    void decodeDataRecord() override
    {
        if (readRaw24() != MAC_DATA_RECORD)
            return;

        /* Read data. */

        readRawBits(8); /* skip spare byte */
        auto inputbuffer = toBytes(readRawBits(MAC_ENCODED_SECTOR_LENGTH * 8))
                               .slice(0, MAC_ENCODED_SECTOR_LENGTH);

        for (unsigned i = 0; i < inputbuffer.size(); i++)
            inputbuffer[i] = decode_data_gcr(inputbuffer[i]);

        _sector->status = Sector::BAD_CHECKSUM;
        Bytes userData = decode_crazy_data(inputbuffer, _sector->status);
        _sector->data.clear();
        _sector->data.writer()
            .append(userData.slice(12, 512))
            .append(userData.slice(0, 12));
    }
};
std::unique_ptr<Decoder> createMacintoshDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new MacintoshDecoder(config));
}

View File

@@ -16,14 +16,14 @@ static bool lastBit;
static double clockRateUsForTrack(unsigned track)
{
if (track < 16)
return 2.623;
return 2.63;
if (track < 32)
return 2.861;
return 2.89;
if (track < 48)
return 3.148;
return 3.20;
if (track < 64)
return 3.497;
return 3.934;
return 3.57;
return 3.98;
}
static unsigned sectorsForTrack(unsigned track)
@@ -174,7 +174,7 @@ static void write_sector(std::vector<bool>& bits,
const std::shared_ptr<const Sector>& sector)
{
if ((sector->data.size() != 512) && (sector->data.size() != 524))
Error() << "unsupported sector size --- you must pick 512 or 524";
error("unsupported sector size --- you must pick 512 or 524");
write_bits(bits, cursor, 0xff, 1 * 8); /* pad byte */
for (int i = 0; i < 7; i++)
@@ -239,13 +239,12 @@ public:
write_sector(bits, cursor, sector);
if (cursor >= bits.size())
Error() << fmt::format(
"track data overrun by {} bits", cursor - bits.size());
error("track data overrun by {} bits", cursor - bits.size());
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
fluxmap->appendBits(bits,
calculatePhysicalClockPeriod(clockRateUs * 1e3, 200e6));
fluxmap->appendBits(
bits, calculatePhysicalClockPeriod(clockRateUs * 1e3, 200e6));
return fluxmap;
}
@@ -253,8 +252,7 @@ private:
const MacintoshEncoderProto& _config;
};
std::unique_ptr<Encoder> createMacintoshEncoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createMacintoshEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new MacintoshEncoder(config));
}

View File

@@ -1,12 +1,12 @@
#ifndef MACINTOSH_H
#define MACINTOSH_H
#define MAC_SECTOR_RECORD 0xd5aa96 /* 1101 0101 1010 1010 1001 0110 */
#define MAC_DATA_RECORD 0xd5aaad   /* 1101 0101 1010 1010 1010 1101 */

#define MAC_SECTOR_LENGTH 524 /* yes, really */
#define MAC_ENCODED_SECTOR_LENGTH 703
#define MAC_FORMAT_BYTE 0x22
#define MAC_TRACKS_PER_DISK 80
@@ -15,8 +15,9 @@ class Decoder;
class DecoderProto;
class EncoderProto;
extern std::unique_ptr<Decoder> createMacintoshDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createMacintoshEncoder(
    const EncoderProto& config);
#endif

View File

@@ -20,17 +20,20 @@ static const FluxPattern SECTOR_SYNC_PATTERN(64, 0xAAAAAAAAAAAA5555LL);
static const FluxPattern SECTOR_ADVANCE_PATTERN(64, 0xAAAAAAAAAAAAAAAALL);
/* Standard Micropolis checksum. Adds all bytes, with carry. */
uint8_t micropolisChecksum(const Bytes& bytes)
{
    ByteReader br(bytes);
    uint16_t sum = 0;
    while (!br.eof())
    {
        if (sum > 0xFF)
        {
            sum -= 0x100 - 1;
        }
        sum += br.read_8();
    }
    /* The last carry is ignored */
    return sum & 0xFF;
}
/* Vector MZOS does not use the standard Micropolis checksum.
@@ -41,145 +44,164 @@ uint8_t micropolisChecksum(const Bytes& bytes) {
* Unlike the Micropolis checksum, this does not cover the 12-byte
* header (track, sector, 10 OS-specific bytes.)
*/
uint8_t mzosChecksum(const Bytes& bytes)
{
    ByteReader br(bytes);
    uint8_t checksum = 0;
    uint8_t databyte;

    while (!br.eof())
    {
        databyte = br.read_8();
        checksum ^= ((databyte << 1) | (databyte >> 7));
    }

    return checksum;
}
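
The "adds all bytes, with carry" description above is an end-around carry: whenever the 16-bit running sum overflows 8 bits, subtracting 0x100 - 1 keeps the low byte and folds the carry back in before the next byte is added. A standalone sketch reproducing that fold on a three-byte example:

// Standalone check of the end-around-carry step in micropolisChecksum().
#include <cstdint>
#include <cstdio>

static uint8_t addWithCarry(const uint8_t* data, int len)
{
    uint16_t sum = 0;
    for (int i = 0; i < len; i++)
    {
        if (sum > 0xFF)
            sum -= 0x100 - 1; // same as: sum = (sum & 0xFF) + 1
        sum += data[i];
    }
    return sum & 0xFF; // the final carry is ignored
}

int main()
{
    const uint8_t data[] = {0xF0, 0x20, 0x05};
    // 0xF0 + 0x20 = 0x110; the carry folds to 0x11 before adding 0x05,
    // giving 0x16.
    printf("checksum = 0x%02x\n", addWithCarry(data, 3));
    return 0;
}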
class MicropolisDecoder : public Decoder
{
public:
    MicropolisDecoder(const DecoderProto& config):
        Decoder(config),
        _config(config.micropolis())
    {
        _checksumType = _config.checksum_type();
    }
    nanoseconds_t advanceToNextRecord() override
    {
        nanoseconds_t now = tell().ns();

        /* For all but the first sector, seek to the next sector pulse.
         * The first sector does not contain the sector pulse in the fluxmap.
         */
        if (now != 0)
        {
            seekToIndexMark();
            now = tell().ns();
        }

        /* Discard a possible partial sector at the end of the track.
         * This partial sector could be mistaken for a conflicted sector, if
         * whatever data read happens to match the checksum of 0, which is
         * rare, but has been observed on some disks.
         */
        if (now > (getFluxmapDuration() - 12.5e6))
        {
            seekToIndexMark();
            return 0;
        }

        nanoseconds_t clock = seekToPattern(SECTOR_SYNC_PATTERN);

        auto syncDelta = tell().ns() - now;
        /* Due to the weak nature of the Micropolis SYNC pattern,
         * it's possible to detect a false SYNC during the gap
         * between the sector pulse and the write gate. If the SYNC
         * is detected less than 100uS after the sector pulse, search
         * for another valid SYNC.
         *
         * Reference: Vector Micropolis Disk Controller Board Technical
         * Information Manual, pp. 1-16.
         */
        if ((syncDelta > 0) && (syncDelta < 100e3))
        {
            seekToPattern(SECTOR_ADVANCE_PATTERN);
            clock = seekToPattern(SECTOR_SYNC_PATTERN);
        }

        _sector->headerStartTime = tell().ns();

        /* seekToPattern() can skip past the index hole; if this happens
         * too close to the end of the Fluxmap, discard the sector.
         */
        if (_sector->headerStartTime > (getFluxmapDuration() - 12.5e6))
        {
            return 0;
        }

        return clock;
    }
    void decodeSectorRecord() override
    {
        readRawBits(48);
        auto rawbits = readRawBits(MICROPOLIS_ENCODED_SECTOR_SIZE * 16);
        auto bytes =
            decodeFmMfm(rawbits).slice(0, MICROPOLIS_ENCODED_SECTOR_SIZE);
        ByteReader br(bytes);

        int syncByte = br.read_8(); /* sync */
        if (syncByte != 0xFF)
            return;

        _sector->logicalTrack = br.read_8();
        _sector->logicalSide = _sector->physicalSide;
        _sector->logicalSector = br.read_8();
        if (_sector->logicalSector > 15)
            return;
        if (_sector->logicalTrack > 76)
            return;
        if (_sector->logicalTrack != _sector->physicalTrack)
            return;

        br.read(10); /* OS data or padding */
        auto data = br.read(MICROPOLIS_PAYLOAD_SIZE);
        uint8_t wantChecksum = br.read_8();

        /* If not specified, automatically determine the checksum type.
         * Once the checksum type is determined, it will be used for the
         * entire disk.
         */
        if (_checksumType == MicropolisDecoderProto::AUTO)
        {
            /* Calculate both standard Micropolis (MDOS, CP/M, OASIS) and MZOS
             * checksums */
            if (wantChecksum == micropolisChecksum(bytes.slice(1, 2 + 266)))
            {
                _checksumType = MicropolisDecoderProto::MICROPOLIS;
            }
            else if (wantChecksum ==
                     mzosChecksum(bytes.slice(
                         MICROPOLIS_HEADER_SIZE, MICROPOLIS_PAYLOAD_SIZE)))
            {
                _checksumType = MicropolisDecoderProto::MZOS;
                std::cout << "Note: MZOS checksum detected." << std::endl;
            }
        }

        uint8_t gotChecksum;

        if (_checksumType == MicropolisDecoderProto::MZOS)
        {
            gotChecksum = mzosChecksum(
                bytes.slice(MICROPOLIS_HEADER_SIZE, MICROPOLIS_PAYLOAD_SIZE));
        }
        else
        {
            gotChecksum = micropolisChecksum(bytes.slice(1, 2 + 266));
        }

        br.read(5); /* 4 byte ECC and ECC-present flag */

        if (_config.sector_output_size() == MICROPOLIS_PAYLOAD_SIZE)
            _sector->data = data;
        else if (_config.sector_output_size() == MICROPOLIS_ENCODED_SECTOR_SIZE)
            _sector->data = bytes;
        else
            error("Sector output size may only be 256 or 275");
        _sector->status =
            (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }
private:
    const MicropolisDecoderProto& _config;
    MicropolisDecoderProto_ChecksumType
        _checksumType; /* -1 = auto, 1 = Micropolis, 2=MZOS */
};
std::unique_ptr<Decoder> createMicropolisDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new MicropolisDecoder(config));
}

View File

@@ -12,7 +12,7 @@ static void write_sector(std::vector<bool>& bits,
{
if ((sector->data.size() != 256) &&
(sector->data.size() != MICROPOLIS_ENCODED_SECTOR_SIZE))
Error() << "unsupported sector size --- you must pick 256 or 275";
error("unsupported sector size --- you must pick 256 or 275");
int fullSectorSize = 40 + MICROPOLIS_ENCODED_SECTOR_SIZE + 40 + 35;
auto fullSector = std::make_shared<std::vector<uint8_t>>();
@@ -24,8 +24,9 @@ static void write_sector(std::vector<bool>& bits,
if (sector->data.size() == MICROPOLIS_ENCODED_SECTOR_SIZE)
{
if (sector->data[0] != 0xFF)
Error() << "275 byte sector doesn't start with sync byte 0xFF. "
"Corrupted sector";
error(
"275 byte sector doesn't start with sync byte 0xFF. "
"Corrupted sector");
uint8_t wantChecksum = sector->data[1 + 2 + 266];
uint8_t gotChecksum =
micropolisChecksum(sector->data.slice(1, 2 + 266));
@@ -57,7 +58,7 @@ static void write_sector(std::vector<bool>& bits,
fullSector->push_back(0);
if (fullSector->size() != fullSectorSize)
Error() << "sector mismatched length";
error("sector mismatched length");
bool lastBit = false;
encodeMfm(bits, cursor, fullSector, lastBit);
/* filler */
@@ -91,12 +92,11 @@ public:
write_sector(bits, cursor, sectorData);
if (cursor != bits.size())
Error() << "track data mismatched length";
error("track data mismatched length");
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
fluxmap->appendBits(bits,
calculatePhysicalClockPeriod(
_config.clock_period_us() * 1e3,
calculatePhysicalClockPeriod(_config.clock_period_us() * 1e3,
_config.rotational_period_ms() * 1e6));
return fluxmap;
}
@@ -105,8 +105,7 @@ private:
const MicropolisEncoderProto& _config;
};
std::unique_ptr<Encoder> createMicropolisEncoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createMicropolisEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new MicropolisEncoder(config));
}

View File

@@ -1,17 +1,20 @@
#ifndef MICROPOLIS_H
#define MICROPOLIS_H
#define MICROPOLIS_PAYLOAD_SIZE (256)
#define MICROPOLIS_HEADER_SIZE (1 + 2 + 10)
#define MICROPOLIS_ENCODED_SECTOR_SIZE \
    (MICROPOLIS_HEADER_SIZE + MICROPOLIS_PAYLOAD_SIZE + 6)
class Decoder;
class Encoder;
class EncoderProto;
class DecoderProto;
extern std::unique_ptr<Decoder> createMicropolisDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createMicropolisEncoder(
    const EncoderProto& config);
extern uint8_t micropolisChecksum(const Bytes& bytes);
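The "Sector output size may only be 256 or 275" check in the decoder and encoder above falls straight out of these macros: 256 is the bare payload, 275 is the whole encoded sector. A minimal sketch of the arithmetic, assuming the macros are in scope; the static_asserts are illustrative and not part of the header:

// MICROPOLIS_HEADER_SIZE         = 1 + 2 + 10   = 13 bytes
// MICROPOLIS_ENCODED_SECTOR_SIZE = 13 + 256 + 6 = 275 bytes
static_assert(MICROPOLIS_HEADER_SIZE == 13, "header is 13 bytes");
static_assert(MICROPOLIS_ENCODED_SECTOR_SIZE == 275,
    "encoded sector = header + 256-byte payload + 6 trailing bytes");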


@@ -19,6 +19,6 @@ message MicropolisEncoderProto {
optional double clock_period_us = 1
[ default = 2.0, (help) = "clock rate on the real device" ];
optional double rotational_period_ms = 2
[ default = 166.0, (help) = "rotational period on the real device" ];
[ default = 200.0, (help) = "rotational period on the real device" ];
}


@@ -26,52 +26,51 @@ const FluxPattern ID_PATTERN(32, 0xaaaaffaf);
class MxDecoder : public Decoder
{
public:
MxDecoder(const DecoderProto& config):
Decoder(config)
{}
MxDecoder(const DecoderProto& config): Decoder(config) {}
void beginTrack() override
{
_clock = _sector->clock = seekToPattern(ID_PATTERN);
_currentSector = 0;
}
{
_clock = _sector->clock = seekToPattern(ID_PATTERN);
_currentSector = 0;
}
nanoseconds_t advanceToNextRecord() override
{
if (_currentSector == 11)
{
/* That was the last sector on the disk. */
return 0;
}
else
return _clock;
}
{
if (_currentSector == 11)
{
/* That was the last sector on the disk. */
return 0;
}
else
return _clock;
}
void decodeSectorRecord() override
{
/* Skip the ID pattern and track word, which is only present on the
 * first sector. We don't trust the track word because some drivers
* don't write it correctly. */
{
/* Skip the ID pattern and track word, which is only present on the
 * first sector. We don't trust the track word because some drivers
* don't write it correctly. */
if (_currentSector == 0)
readRawBits(64);
if (_currentSector == 0)
readRawBits(64);
auto bits = readRawBits((SECTOR_SIZE+2)*16);
auto bytes = decodeFmMfm(bits).slice(0, SECTOR_SIZE+2);
auto bits = readRawBits((SECTOR_SIZE + 2) * 16);
auto bytes = decodeFmMfm(bits).slice(0, SECTOR_SIZE + 2);
uint16_t gotChecksum = 0;
ByteReader br(bytes);
for (int i=0; i<(SECTOR_SIZE/2); i++)
gotChecksum += br.read_be16();
uint16_t wantChecksum = br.read_be16();
uint16_t gotChecksum = 0;
ByteReader br(bytes);
for (int i = 0; i < (SECTOR_SIZE / 2); i++)
gotChecksum += br.read_be16();
uint16_t wantChecksum = br.read_be16();
_sector->logicalTrack = _sector->physicalTrack;
_sector->logicalSide = _sector->physicalSide;
_sector->logicalSector = _currentSector;
_sector->data = bytes.slice(0, SECTOR_SIZE).swab();
_sector->status = (gotChecksum == wantChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
_currentSector++;
}
_sector->logicalTrack = _sector->physicalTrack;
_sector->logicalSide = _sector->physicalSide;
_sector->logicalSector = _currentSector;
_sector->data = bytes.slice(0, SECTOR_SIZE).swab();
_sector->status =
(gotChecksum == wantChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
_currentSector++;
}
private:
nanoseconds_t _clock;
@@ -80,7 +79,5 @@ private:
std::unique_ptr<Decoder> createMxDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new MxDecoder(config));
return std::unique_ptr<Decoder>(new MxDecoder(config));
}
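The MX checksum computed above is a plain 16-bit big-endian word sum over the sector payload. A standalone sketch of the same rule; the codebase's Bytes/ByteReader types are replaced with a raw array, and the sample payload is made up:

#include <cstdint>
#include <cstdio>

// Sum a buffer as big-endian 16-bit words, wrapping at 2^16, matching the
// read_be16() accumulation in MxDecoder::decodeSectorRecord().
static uint16_t mxWordSum(const uint8_t* data, size_t words)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < words; i++)
        sum += (uint16_t)((data[2 * i] << 8) | data[2 * i + 1]);
    return sum;
}

int main()
{
    const uint8_t sample[4] = {0x12, 0x34, 0xff, 0xff}; // made-up payload
    std::printf("%04x\n", mxWordSum(sample, 2));        // prints "1233"
}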

View File

@@ -22,7 +22,7 @@
#include "fmt/format.h"
#define MFM_ID 0xaaaaaaaaaaaa5545LL
#define FM_ID 0xaaaaaaaaaaaaffefLL
#define FM_ID 0xaaaaaaaaaaaaffefLL
/*
* MFM sectors have 32 bytes of 00's followed by two sync characters,
* specified in the North Star MDS manual as 0xFBFB.
@@ -44,133 +44,143 @@ static const FluxPattern MFM_PATTERN(64, MFM_ID);
*/
static const FluxPattern FM_PATTERN(64, FM_ID);
const FluxMatchers ANY_SECTOR_PATTERN(
{
&MFM_PATTERN,
&FM_PATTERN,
}
);
const FluxMatchers ANY_SECTOR_PATTERN({
&MFM_PATTERN,
&FM_PATTERN,
});
/* Checksum is initially 0.
* For each data byte, XOR with the current checksum.
* Rotate checksum left, carrying bit 7 to bit 0.
*/
uint8_t northstarChecksum(const Bytes& bytes) {
ByteReader br(bytes);
uint8_t checksum = 0;
uint8_t northstarChecksum(const Bytes& bytes)
{
ByteReader br(bytes);
uint8_t checksum = 0;
while (!br.eof()) {
checksum ^= br.read_8();
checksum = ((checksum << 1) | ((checksum >> 7)));
}
while (!br.eof())
{
checksum ^= br.read_8();
checksum = ((checksum << 1) | ((checksum >> 7)));
}
return checksum;
return checksum;
}
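A quick worked example of the XOR-and-rotate rule above, applied to the single arbitrary byte 0x81 (an illustration, not data from any disk):

#include <cstdint>
#include <cstdio>

int main()
{
    uint8_t checksum = 0;
    checksum ^= 0x81;                             // XOR in the data byte: 0x81
    checksum = (checksum << 1) | (checksum >> 7); // rotate left, bit 7 to bit 0: 0x03
    std::printf("%02x\n", checksum);              // prints "03"
}

Because of the rotation the result depends on byte order, unlike a plain XOR of all the bytes.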
class NorthstarDecoder : public Decoder
{
public:
NorthstarDecoder(const DecoderProto& config):
Decoder(config),
_config(config.northstar())
{}
NorthstarDecoder(const DecoderProto& config):
Decoder(config),
_config(config.northstar())
{
}
/* Search for FM or MFM sector record */
nanoseconds_t advanceToNextRecord() override
{
nanoseconds_t now = tell().ns();
/* Search for FM or MFM sector record */
nanoseconds_t advanceToNextRecord() override
{
nanoseconds_t now = tell().ns();
/* For all but the first sector, seek to the next sector pulse.
* The first sector does not contain the sector pulse in the fluxmap.
*/
if (now != 0) {
seekToIndexMark();
now = tell().ns();
}
/* For all but the first sector, seek to the next sector pulse.
* The first sector does not contain the sector pulse in the fluxmap.
*/
if (now != 0)
{
seekToIndexMark();
now = tell().ns();
}
/* Discard a possible partial sector at the end of the track.
* This partial sector could be mistaken for a conflicted sector, if
 * whatever data was read happens to match the checksum of 0, which is
* rare, but has been observed on some disks.
*/
if (now > (getFluxmapDuration() - 21e6)) {
seekToIndexMark();
return 0;
}
/* Discard a possible partial sector at the end of the track.
* This partial sector could be mistaken for a conflicted sector, if
 * whatever data was read happens to match the checksum of 0, which is
* rare, but has been observed on some disks.
*/
if (now > (getFluxmapDuration() - 21e6))
{
seekToIndexMark();
return 0;
}
int msSinceIndex = std::round(now / 1e6);
int msSinceIndex = std::round(now / 1e6);
/* Note that the seekToPattern ignores the sector pulses, so if
* a sector is not found for some reason, the seek will advance
* past one or more sector pulses. For this reason, calculate
* _hardSectorId after the sector header is found.
*/
nanoseconds_t clock = seekToPattern(ANY_SECTOR_PATTERN);
_sector->headerStartTime = tell().ns();
/* Note that the seekToPattern ignores the sector pulses, so if
* a sector is not found for some reason, the seek will advance
* past one or more sector pulses. For this reason, calculate
* _hardSectorId after the sector header is found.
*/
nanoseconds_t clock = seekToPattern(ANY_SECTOR_PATTERN);
_sector->headerStartTime = tell().ns();
/* Discard a possible partial sector. */
if (_sector->headerStartTime > (getFluxmapDuration() - 21e6)) {
return 0;
}
/* Discard a possible partial sector. */
if (_sector->headerStartTime > (getFluxmapDuration() - 21e6))
{
return 0;
}
int sectorFoundTimeRaw = std::round(_sector->headerStartTime / 1e6);
int sectorFoundTime;
int sectorFoundTimeRaw = std::round(_sector->headerStartTime / 1e6);
int sectorFoundTime;
/* Round time to the nearest 20ms */
if ((sectorFoundTimeRaw % 20) < 10) {
sectorFoundTime = (sectorFoundTimeRaw / 20) * 20;
}
else {
sectorFoundTime = ((sectorFoundTimeRaw + 20) / 20) * 20;
}
/* Round time to the nearest 20ms */
if ((sectorFoundTimeRaw % 20) < 10)
{
sectorFoundTime = (sectorFoundTimeRaw / 20) * 20;
}
else
{
sectorFoundTime = ((sectorFoundTimeRaw + 20) / 20) * 20;
}
/* Calculate the sector ID based on time since the index */
_hardSectorId = (sectorFoundTime / 20) % 10;
/* Calculate the sector ID based on time since the index */
_hardSectorId = (sectorFoundTime / 20) % 10;
return clock;
}
return clock;
}
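The 20ms slot arithmetic above can be exercised on its own; a small sketch using made-up times of 87ms and 131ms after the index pulse:

#include <cstdio>

// Mirrors the rounding above: snap to the nearest 20ms sector slot, then
// derive the hard-sector ID on a 10-sector disk.
static int hardSectorIdFor(int msSinceIndex)
{
    int sectorFoundTime;
    if ((msSinceIndex % 20) < 10)
        sectorFoundTime = (msSinceIndex / 20) * 20;
    else
        sectorFoundTime = ((msSinceIndex + 20) / 20) * 20;
    return (sectorFoundTime / 20) % 10;
}

int main()
{
    // 87ms rounds down to slot 80 (ID 4); 131ms rounds up to slot 140 (ID 7).
    std::printf("%d %d\n", hardSectorIdFor(87), hardSectorIdFor(131)); // prints "4 7"
}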
void decodeSectorRecord() override
{
uint64_t id = toBytes(readRawBits(64)).reader().read_be64();
unsigned recordSize, payloadSize, headerSize;
void decodeSectorRecord() override
{
uint64_t id = toBytes(readRawBits(64)).reader().read_be64();
unsigned recordSize, payloadSize, headerSize;
if (id == MFM_ID) {
recordSize = NORTHSTAR_ENCODED_SECTOR_SIZE_DD;
payloadSize = NORTHSTAR_PAYLOAD_SIZE_DD;
headerSize = NORTHSTAR_HEADER_SIZE_DD;
}
else {
recordSize = NORTHSTAR_ENCODED_SECTOR_SIZE_SD;
payloadSize = NORTHSTAR_PAYLOAD_SIZE_SD;
headerSize = NORTHSTAR_HEADER_SIZE_SD;
}
if (id == MFM_ID)
{
recordSize = NORTHSTAR_ENCODED_SECTOR_SIZE_DD;
payloadSize = NORTHSTAR_PAYLOAD_SIZE_DD;
headerSize = NORTHSTAR_HEADER_SIZE_DD;
}
else
{
recordSize = NORTHSTAR_ENCODED_SECTOR_SIZE_SD;
payloadSize = NORTHSTAR_PAYLOAD_SIZE_SD;
headerSize = NORTHSTAR_HEADER_SIZE_SD;
}
auto rawbits = readRawBits(recordSize * 16);
auto bytes = decodeFmMfm(rawbits).slice(0, recordSize);
ByteReader br(bytes);
auto rawbits = readRawBits(recordSize * 16);
auto bytes = decodeFmMfm(rawbits).slice(0, recordSize);
ByteReader br(bytes);
_sector->logicalSide = _sector->physicalSide;
_sector->logicalSector = _hardSectorId;
_sector->logicalTrack = _sector->physicalTrack;
_sector->logicalSide = _sector->physicalSide;
_sector->logicalSector = _hardSectorId;
_sector->logicalTrack = _sector->physicalTrack;
if (headerSize == NORTHSTAR_HEADER_SIZE_DD) {
br.read_8(); /* MFM second Sync char, usually 0xFB */
}
if (headerSize == NORTHSTAR_HEADER_SIZE_DD)
{
br.read_8(); /* MFM second Sync char, usually 0xFB */
}
_sector->data = br.read(payloadSize);
uint8_t wantChecksum = br.read_8();
uint8_t gotChecksum = northstarChecksum(bytes.slice(headerSize - 1, payloadSize));
_sector->status = (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = br.read(payloadSize);
uint8_t wantChecksum = br.read_8();
uint8_t gotChecksum =
northstarChecksum(bytes.slice(headerSize - 1, payloadSize));
_sector->status =
(wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
private:
const NorthstarDecoderProto& _config;
uint8_t _hardSectorId;
const NorthstarDecoderProto& _config;
uint8_t _hardSectorId;
};
std::unique_ptr<Decoder> createNorthstarDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new NorthstarDecoder(config));
return std::unique_ptr<Decoder>(new NorthstarDecoder(config));
}

View File

@@ -49,7 +49,7 @@ static void write_sector(std::vector<bool>& bits,
doubleDensity = true;
break;
default:
Error() << "unsupported sector size --- you must pick 256 or 512";
error("unsupported sector size --- you must pick 256 or 512");
break;
}
@@ -96,9 +96,10 @@ static void write_sector(std::vector<bool>& bits,
fullSector->push_back(GAP2_FILL_BYTE);
if (fullSector->size() != fullSectorSize)
Error() << "sector mismatched length (" << sector->data.size()
<< ") expected: " << fullSector->size() << " got "
<< fullSectorSize;
error("sector mismatched length ({}); expected {}, got {}",
sector->data.size(),
fullSector->size(),
fullSectorSize);
}
else
{
@@ -148,7 +149,7 @@ public:
write_sector(bits, cursor, sectorData);
if (cursor > bits.size())
Error() << "track data overrun";
error("track data overrun");
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
fluxmap->appendBits(bits,
@@ -161,8 +162,7 @@ private:
const NorthstarEncoderProto& _config;
};
std::unique_ptr<Encoder> createNorthstarEncoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createNorthstarEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new NorthstarEncoder(config));
}

View File

@@ -1,7 +1,8 @@
#ifndef NORTHSTAR_H
#define NORTHSTAR_H
/* Northstar floppies are 10-hard sectored disks with a sector format as follows:
/* Northstar floppies are 10-hard sectored disks with a sector format as
* follows:
*
* |----------------------------------|
* | SYNC Byte | Payload | Checksum |
@@ -12,15 +13,19 @@
*
*/
#define NORTHSTAR_PREAMBLE_SIZE_SD (16)
#define NORTHSTAR_PREAMBLE_SIZE_DD (32)
#define NORTHSTAR_HEADER_SIZE_SD (1)
#define NORTHSTAR_HEADER_SIZE_DD (2)
#define NORTHSTAR_PAYLOAD_SIZE_SD (256)
#define NORTHSTAR_PAYLOAD_SIZE_DD (512)
#define NORTHSTAR_CHECKSUM_SIZE (1)
#define NORTHSTAR_ENCODED_SECTOR_SIZE_SD (NORTHSTAR_HEADER_SIZE_SD + NORTHSTAR_PAYLOAD_SIZE_SD + NORTHSTAR_CHECKSUM_SIZE)
#define NORTHSTAR_ENCODED_SECTOR_SIZE_DD (NORTHSTAR_HEADER_SIZE_DD + NORTHSTAR_PAYLOAD_SIZE_DD + NORTHSTAR_CHECKSUM_SIZE)
#define NORTHSTAR_PREAMBLE_SIZE_SD (16)
#define NORTHSTAR_PREAMBLE_SIZE_DD (32)
#define NORTHSTAR_HEADER_SIZE_SD (1)
#define NORTHSTAR_HEADER_SIZE_DD (2)
#define NORTHSTAR_PAYLOAD_SIZE_SD (256)
#define NORTHSTAR_PAYLOAD_SIZE_DD (512)
#define NORTHSTAR_CHECKSUM_SIZE (1)
#define NORTHSTAR_ENCODED_SECTOR_SIZE_SD \
(NORTHSTAR_HEADER_SIZE_SD + NORTHSTAR_PAYLOAD_SIZE_SD + \
NORTHSTAR_CHECKSUM_SIZE)
#define NORTHSTAR_ENCODED_SECTOR_SIZE_DD \
(NORTHSTAR_HEADER_SIZE_DD + NORTHSTAR_PAYLOAD_SIZE_DD + \
NORTHSTAR_CHECKSUM_SIZE)
class Decoder;
class Encoder;
@@ -29,7 +34,9 @@ class DecoderProto;
extern uint8_t northstarChecksum(const Bytes& bytes);
extern std::unique_ptr<Decoder> createNorthstarDecoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createNorthstarEncoder(const EncoderProto& config);
extern std::unique_ptr<Decoder> createNorthstarDecoder(
const DecoderProto& config);
extern std::unique_ptr<Encoder> createNorthstarEncoder(
const EncoderProto& config);
#endif /* NORTHSTAR */
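For reference, the totals these macros work out to; the static_asserts are an illustrative sketch assuming the header is included, not part of the file:

// Single density: 1-byte header + 256-byte payload + 1-byte checksum = 258 bytes.
static_assert(NORTHSTAR_ENCODED_SECTOR_SIZE_SD == 258, "SD encoded sector size");
// Double density: 2-byte header + 512-byte payload + 1-byte checksum = 515 bytes.
static_assert(NORTHSTAR_ENCODED_SECTOR_SIZE_DD == 515, "DD encoded sector size");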

arch/rolandd20/decoder.cc (new file, 51 lines added)

@@ -0,0 +1,51 @@
#include "lib/globals.h"
#include "lib/decoders/decoders.h"
#include "lib/crc.h"
#include "lib/fluxmap.h"
#include "lib/decoders/fluxmapreader.h"
#include "lib/sector.h"
#include "lib/bytes.h"
#include "rolandd20.h"
#include <string.h>
/* Sector header record:
*
* BF FF FF FF FF FF FE AB
*
* This encodes to:
*
* e d 5 5 5 5 5 5
* 1110 1101 0101 0101 0101 0101 0101 0101
* 5 5 5 5 5 5 5 5
* 0101 0101 0101 0101 0101 0101 0101 0101
* 5 5 5 5 5 5 5 5
* 0101 0101 0101 0101 0101 0101 0101 0101
* 5 5 5 4 4 4 4 5
* 0101 0101 0101 0100 0100 0100 0100 0101
*/
static const FluxPattern SECTOR_PATTERN(64, 0xed55555555555555LL);
class RolandD20Decoder : public Decoder
{
public:
RolandD20Decoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(SECTOR_PATTERN);
}
void decodeSectorRecord() override
{
auto rawbits = readRawBits(256);
const auto& bytes = decodeFmMfm(rawbits);
fmt::print("{} ", _sector->clock);
hexdump(std::cout, bytes);
}
};
std::unique_ptr<Decoder> createRolandD20Decoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new RolandD20Decoder(config));
}


@@ -0,0 +1,4 @@
#pragma once
extern std::unique_ptr<Decoder> createRolandD20Decoder(
const DecoderProto& config);


@@ -0,0 +1,5 @@
syntax = "proto2";
message RolandD20DecoderProto {}


@@ -7,4 +7,3 @@
extern std::unique_ptr<Decoder> createSmaky6Decoder(const DecoderProto& config);
#endif


@@ -38,61 +38,63 @@ const FluxPattern SECTOR_RECORD_PATTERN(32, 0x11112244);
const uint16_t DATA_ID = 0x550b;
const FluxPattern DATA_RECORD_PATTERN(32, 0x11112245);
const FluxMatchers ANY_RECORD_PATTERN({ &SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN });
const FluxMatchers ANY_RECORD_PATTERN(
{&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
class Tids990Decoder : public Decoder
{
public:
Tids990Decoder(const DecoderProto& config):
Decoder(config)
{}
Tids990Decoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(ANY_RECORD_PATTERN);
}
{
return seekToPattern(ANY_RECORD_PATTERN);
}
void decodeSectorRecord() override
{
auto bits = readRawBits(TIDS990_SECTOR_RECORD_SIZE*16);
auto bytes = decodeFmMfm(bits).slice(0, TIDS990_SECTOR_RECORD_SIZE);
{
auto bits = readRawBits(TIDS990_SECTOR_RECORD_SIZE * 16);
auto bytes = decodeFmMfm(bits).slice(0, TIDS990_SECTOR_RECORD_SIZE);
ByteReader br(bytes);
if (br.read_be16() != SECTOR_ID)
return;
ByteReader br(bytes);
if (br.read_be16() != SECTOR_ID)
return;
uint16_t gotChecksum = crc16(CCITT_POLY, bytes.slice(1, TIDS990_SECTOR_RECORD_SIZE-3));
uint16_t gotChecksum =
crc16(CCITT_POLY, bytes.slice(1, TIDS990_SECTOR_RECORD_SIZE - 3));
_sector->logicalSide = br.read_8() >> 3;
_sector->logicalTrack = br.read_8();
br.read_8(); /* number of sectors per track */
_sector->logicalSector = br.read_8();
br.read_be16(); /* sector size */
uint16_t wantChecksum = br.read_be16();
_sector->logicalSide = br.read_8() >> 3;
_sector->logicalTrack = br.read_8();
br.read_8(); /* number of sectors per track */
_sector->logicalSector = br.read_8();
br.read_be16(); /* sector size */
uint16_t wantChecksum = br.read_be16();
if (wantChecksum == gotChecksum)
_sector->status = Sector::DATA_MISSING; /* correct but unintuitive */
}
if (wantChecksum == gotChecksum)
_sector->status =
Sector::DATA_MISSING; /* correct but unintuitive */
}
void decodeDataRecord() override
{
auto bits = readRawBits(TIDS990_DATA_RECORD_SIZE*16);
auto bytes = decodeFmMfm(bits).slice(0, TIDS990_DATA_RECORD_SIZE);
void decodeDataRecord() override
{
auto bits = readRawBits(TIDS990_DATA_RECORD_SIZE * 16);
auto bytes = decodeFmMfm(bits).slice(0, TIDS990_DATA_RECORD_SIZE);
ByteReader br(bytes);
if (br.read_be16() != DATA_ID)
return;
ByteReader br(bytes);
if (br.read_be16() != DATA_ID)
return;
uint16_t gotChecksum = crc16(CCITT_POLY, bytes.slice(1, TIDS990_DATA_RECORD_SIZE-3));
uint16_t gotChecksum =
crc16(CCITT_POLY, bytes.slice(1, TIDS990_DATA_RECORD_SIZE - 3));
_sector->data = br.read(TIDS990_PAYLOAD_SIZE);
uint16_t wantChecksum = br.read_be16();
_sector->status = (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = br.read(TIDS990_PAYLOAD_SIZE);
uint16_t wantChecksum = br.read_be16();
_sector->status =
(wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createTids990Decoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new Tids990Decoder(config));
return std::unique_ptr<Decoder>(new Tids990Decoder(config));
}
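Reading the byte consumption in decodeSectorRecord() and decodeDataRecord() back against the size macros gives the record layouts; a sketch with descriptive field labels (the labels are not from the source):

// Sector record, TIDS990_SECTOR_RECORD_SIZE = 10 bytes:
//   [0..1]  big-endian record ID (SECTOR_ID)
//   [2]     side (the decoder uses the value shifted right by 3)
//   [3]     logical track
//   [4]     sectors per track (read and ignored)
//   [5]     logical sector
//   [6..7]  sector size (read and ignored)
//   [8..9]  big-endian CRC16-CCITT over bytes [1..7]
//
// Data record, TIDS990_DATA_RECORD_SIZE = 288 + 4 = 292 bytes:
//   [0..1]     big-endian record ID (DATA_ID, 0x550b)
//   [2..289]   288-byte payload
//   [290..291] big-endian CRC16-CCITT over bytes [1..289]
static_assert(TIDS990_DATA_RECORD_SIZE == 292, "data record is 292 bytes");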


@@ -127,14 +127,14 @@ public:
}
if (_cursor >= _bits.size())
Error() << "track data overrun";
error("track data overrun");
while (_cursor < _bits.size())
writeBytes(1, 0x55);
auto fluxmap = std::make_unique<Fluxmap>();
fluxmap->appendBits(_bits,
calculatePhysicalClockPeriod(clockRateUs * 1e3,
_config.rotational_period_ms() * 1e6));
calculatePhysicalClockPeriod(
clockRateUs * 1e3, _config.rotational_period_ms() * 1e6));
return fluxmap;
}
@@ -145,8 +145,7 @@ private:
bool _lastBit;
};
std::unique_ptr<Encoder> createTids990Encoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createTids990Encoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new Tids990Encoder(config));
}


@@ -1,18 +1,18 @@
#ifndef TIDS990_H
#define TIDS990_H
#define TIDS990_PAYLOAD_SIZE 288 /* bytes */
#define TIDS990_SECTOR_RECORD_SIZE 10 /* bytes */
#define TIDS990_DATA_RECORD_SIZE (TIDS990_PAYLOAD_SIZE + 4) /* bytes */
#define TIDS990_PAYLOAD_SIZE 288 /* bytes */
#define TIDS990_SECTOR_RECORD_SIZE 10 /* bytes */
#define TIDS990_DATA_RECORD_SIZE (TIDS990_PAYLOAD_SIZE + 4) /* bytes */
class Encoder;
class Decoder;
class DecoderProto;
class EncoderProto;
extern std::unique_ptr<Decoder> createTids990Decoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createTids990Encoder(const EncoderProto& config);
extern std::unique_ptr<Decoder> createTids990Decoder(
const DecoderProto& config);
extern std::unique_ptr<Encoder> createTids990Encoder(
const EncoderProto& config);
#endif


@@ -13,16 +13,18 @@
const FluxPattern SECTOR_RECORD_PATTERN(32, VICTOR9K_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(32, VICTOR9K_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN({ &SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN });
const FluxMatchers ANY_RECORD_PATTERN(
{&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
case gcr: return data;
#include "data_gcr.h"
#undef GCR_ENTRY
#define GCR_ENTRY(gcr, data) \
case gcr: \
return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
@@ -37,11 +39,11 @@ static Bytes decode(const std::vector<bool>& bits)
while (ii != bits.end())
{
uint8_t inputfifo = 0;
for (size_t i=0; i<5; i++)
for (size_t i = 0; i < 5; i++)
{
if (ii == bits.end())
break;
inputfifo = (inputfifo<<1) | *ii++;
inputfifo = (inputfifo << 1) | *ii++;
}
uint8_t decoded = decode_data_gcr(inputfifo);
@@ -55,63 +57,62 @@ static Bytes decode(const std::vector<bool>& bits)
class Victor9kDecoder : public Decoder
{
public:
Victor9kDecoder(const DecoderProto& config):
Decoder(config)
{}
Victor9kDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(ANY_RECORD_PATTERN);
}
{
return seekToPattern(ANY_RECORD_PATTERN);
}
void decodeSectorRecord() override
{
/* Check the ID. */
{
/* Check the ID. */
if (readRaw32() != VICTOR9K_SECTOR_RECORD)
return;
if (readRaw32() != VICTOR9K_SECTOR_RECORD)
return;
/* Read header. */
/* Read header. */
auto bytes = decode(readRawBits(3*10)).slice(0, 3);
auto bytes = decode(readRawBits(3 * 10)).slice(0, 3);
uint8_t rawTrack = bytes[0];
_sector->logicalSector = bytes[1];
uint8_t gotChecksum = bytes[2];
uint8_t rawTrack = bytes[0];
_sector->logicalSector = bytes[1];
uint8_t gotChecksum = bytes[2];
_sector->logicalTrack = rawTrack & 0x7f;
_sector->logicalSide = rawTrack >> 7;
uint8_t wantChecksum = bytes[0] + bytes[1];
if ((_sector->logicalSector > 20) || (_sector->logicalTrack > 85) || (_sector->logicalSide > 1))
return;
if (wantChecksum == gotChecksum)
_sector->status = Sector::DATA_MISSING; /* unintuitive but correct */
}
_sector->logicalTrack = rawTrack & 0x7f;
_sector->logicalSide = rawTrack >> 7;
uint8_t wantChecksum = bytes[0] + bytes[1];
if ((_sector->logicalSector > 20) || (_sector->logicalTrack > 85) ||
(_sector->logicalSide > 1))
return;
if (wantChecksum == gotChecksum)
_sector->status =
Sector::DATA_MISSING; /* unintuitive but correct */
}
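The header packing above is one byte carrying both track and side; a one-line illustrative check using the made-up raw track byte 0x85:

// 0x85: the low seven bits give track 5, the top bit gives side 1. The header
// checksum is then just the 8-bit sum of the first two header bytes.
static_assert(((0x85 & 0x7f) == 5) && ((0x85 >> 7) == 1), "track/side unpacking");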
void decodeDataRecord() override
{
/* Check the ID. */
{
/* Check the ID. */
if (readRaw32() != VICTOR9K_DATA_RECORD)
return;
if (readRaw32() != VICTOR9K_DATA_RECORD)
return;
/* Read data. */
/* Read data. */
auto bytes = decode(readRawBits((VICTOR9K_SECTOR_LENGTH+4)*10))
.slice(0, VICTOR9K_SECTOR_LENGTH+4);
ByteReader br(bytes);
auto bytes = decode(readRawBits((VICTOR9K_SECTOR_LENGTH + 4) * 10))
.slice(0, VICTOR9K_SECTOR_LENGTH + 4);
ByteReader br(bytes);
_sector->data = br.read(VICTOR9K_SECTOR_LENGTH);
uint16_t gotChecksum = sumBytes(_sector->data);
uint16_t wantChecksum = br.read_le16();
_sector->status = (gotChecksum == wantChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = br.read(VICTOR9K_SECTOR_LENGTH);
uint16_t gotChecksum = sumBytes(_sector->data);
uint16_t wantChecksum = br.read_le16();
_sector->status =
(gotChecksum == wantChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
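The bit count passed to readRawBits() above follows from the GCR coding: each decoded byte is stored as two 5-bit groups, so 10 raw bits per byte. An illustrative check, assuming the victor9k header is in scope:

// 512-byte payload plus 4 trailing bytes (which include the 16-bit
// little-endian checksum), at 10 raw bits per decoded byte:
static_assert((VICTOR9K_SECTOR_LENGTH + 4) * 10 == 5160, "data record spans 5160 raw bits");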
};
std::unique_ptr<Decoder> createVictor9kDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new Victor9kDecoder(config));
return std::unique_ptr<Decoder>(new Victor9kDecoder(config));
}


@@ -169,14 +169,15 @@ public:
const Image& image) override
{
Victor9kEncoderProto::TrackdataProto trackdata;
getTrackFormat(trackdata, trackInfo->logicalTrack, trackInfo->logicalSide);
getTrackFormat(
trackdata, trackInfo->logicalTrack, trackInfo->logicalSide);
unsigned bitsPerRevolution = (trackdata.rotational_period_ms() * 1e3) /
trackdata.clock_period_us();
std::vector<bool> bits(bitsPerRevolution);
nanoseconds_t clockPeriod = calculatePhysicalClockPeriod(
trackdata.clock_period_us() * 1e3,
trackdata.rotational_period_ms() * 1e6);
nanoseconds_t clockPeriod =
calculatePhysicalClockPeriod(trackdata.clock_period_us() * 1e3,
trackdata.rotational_period_ms() * 1e6);
unsigned cursor = 0;
fillBitmapTo(bits,
@@ -189,8 +190,7 @@ public:
write_sector(bits, cursor, trackdata, *sector);
if (cursor >= bits.size())
Error() << fmt::format(
"track data overrun by {} bits", cursor - bits.size());
error("track data overrun by {} bits", cursor - bits.size());
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -202,8 +202,7 @@ private:
const Victor9kEncoderProto& _config;
};
std::unique_ptr<Encoder> createVictor9kEncoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createVictor9kEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new Victor9kEncoder(config));
}


@@ -13,12 +13,14 @@ class DecoderProto;
/* ... 1101 0100 1001
* ^^ ^^^^ ^^^^ ten bit IO byte */
#define VICTOR9K_DATA_RECORD 0xfffffd49
#define VICTOR9K_DATA_RECORD 0xfffffd49
#define VICTOR9K_DATA_ID 0x8
#define VICTOR9K_SECTOR_LENGTH 512
extern std::unique_ptr<Decoder> createVictor9kDecoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createVictor9kEncoder(const EncoderProto& config);
extern std::unique_ptr<Decoder> createVictor9kDecoder(
const DecoderProto& config);
extern std::unique_ptr<Encoder> createVictor9kEncoder(
const EncoderProto& config);
#endif


@@ -16,42 +16,40 @@ static const FluxPattern SECTOR_START_PATTERN(16, 0xaaab);
class ZilogMczDecoder : public Decoder
{
public:
ZilogMczDecoder(const DecoderProto& config):
Decoder(config)
{}
ZilogMczDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
seekToIndexMark();
return seekToPattern(SECTOR_START_PATTERN);
}
{
seekToIndexMark();
return seekToPattern(SECTOR_START_PATTERN);
}
void decodeSectorRecord() override
{
readRawBits(14);
{
readRawBits(14);
auto rawbits = readRawBits(140*16);
auto bytes = decodeFmMfm(rawbits).slice(0, 140);
ByteReader br(bytes);
auto rawbits = readRawBits(140 * 16);
auto bytes = decodeFmMfm(rawbits).slice(0, 140);
ByteReader br(bytes);
_sector->logicalSector = br.read_8() & 0x1f;
_sector->logicalSide = 0;
_sector->logicalTrack = br.read_8() & 0x7f;
if (_sector->logicalSector > 31)
return;
if (_sector->logicalTrack > 80)
return;
_sector->logicalSector = br.read_8() & 0x1f;
_sector->logicalSide = 0;
_sector->logicalTrack = br.read_8() & 0x7f;
if (_sector->logicalSector > 31)
return;
if (_sector->logicalTrack > 80)
return;
_sector->data = br.read(132);
uint16_t wantChecksum = br.read_be16();
uint16_t gotChecksum = crc16(MODBUS_POLY, 0x0000, bytes.slice(0, 134));
_sector->data = br.read(132);
uint16_t wantChecksum = br.read_be16();
uint16_t gotChecksum = crc16(MODBUS_POLY, 0x0000, bytes.slice(0, 134));
_sector->status = (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->status =
(wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createZilogMczDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new ZilogMczDecoder(config));
return std::unique_ptr<Decoder>(new ZilogMczDecoder(config));
}
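Reading the byte consumption above back as a record layout; a sketch with descriptive labels that are not from the source:

// 140 bytes are decoded per record and used as:
//   [0]        sector number (low 5 bits)
//   [1]        track number (low 7 bits)
//   [2..133]   132-byte payload
//   [134..135] big-endian CRC16 (MODBUS polynomial) over bytes [0..133]
//   [136..139] trailing bytes, not examined here
static_assert(1 + 1 + 132 + 2 + 4 == 140, "record is 140 bytes");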


@@ -1,8 +1,7 @@
#ifndef ZILOGMCZ_H
#define ZILOGMCZ_H
extern std::unique_ptr<Decoder> createZilogMczDecoder(const DecoderProto& config);
extern std::unique_ptr<Decoder> createZilogMczDecoder(
const DecoderProto& config);
#endif


@@ -55,6 +55,7 @@ proto_cc_library {
"./arch/micropolis/micropolis.proto",
"./arch/mx/mx.proto",
"./arch/northstar/northstar.proto",
"./arch/rolandd20/rolandd20.proto",
"./arch/tids990/tids990.proto",
"./arch/victor9k/victor9k.proto",
"./arch/zilogmcz/zilogmcz.proto",
@@ -93,6 +94,7 @@ clibrary {
"./arch/mx/decoder.cc",
"./arch/northstar/decoder.cc",
"./arch/northstar/encoder.cc",
"./arch/rolandd20/rolandd20.cc",
"./arch/tids990/decoder.cc",
"./arch/tids990/encoder.cc",
"./arch/victor9k/decoder.cc",


@@ -17,6 +17,6 @@ $(ADFLIB_OBJS): CFLAGS += -Idep/adflib/src -Idep/adflib
ADFLIB_LIB = $(OBJDIR)/libadflib.a
$(ADFLIB_LIB): $(ADFLIB_OBJS)
ADFLIB_CFLAGS = -Idep/adflib/src
ADFLIB_LDFLAGS = $(ADFLIB_LIB)
ADFLIB_LDFLAGS =
OBJS += $(ADFLIB_OBJS)


@@ -8,6 +8,6 @@ $(FATFS_OBJS): CFLAGS += -Idep/fatfs/source
FATFS_LIB = $(OBJDIR)/libfatfs.a
$(FATFS_LIB): $(FATFS_OBJS)
FATFS_CFLAGS = -Idep/fatfs/source
FATFS_LDFLAGS = $(FATFS_LIB)
FATFS_LDFLAGS =
OBJS += $(FATFS_OBJS)


@@ -17,6 +17,6 @@ $(HFSUTILS_OBJS): CFLAGS += -Idep/hfsutils/libhfs
HFSUTILS_LIB = $(OBJDIR)/libhfsutils.a
$(HFSUTILS_LIB): $(HFSUTILS_OBJS)
HFSUTILS_CFLAGS = -Idep/hfsutils/libhfs
HFSUTILS_LDFLAGS = $(HFSUTILS_LIB)
HFSUTILS_LDFLAGS =
OBJS += $(HFSUTILS_OBJS)


@@ -1,4 +1,4 @@
cmake_minimum_required (VERSION 2.8.11)
cmake_minimum_required (VERSION 3.10.0)
# Fix behavior of CMAKE_CXX_STANDARD when targeting macOS.
if (POLICY CMP0025)
@@ -18,7 +18,7 @@ endif ()
project (libusbp)
set (LIBUSBP_VERSION_MAJOR 1)
set (LIBUSBP_VERSION_MINOR 2)
set (LIBUSBP_VERSION_MINOR 3)
set (LIBUSBP_VERSION_PATCH 0)
# Make 'Release' be the default build type, since the debug builds
@@ -49,28 +49,8 @@ set(VBOX_LINUX_ON_WINDOWS FALSE CACHE BOOL
set(ENABLE_GCOV FALSE CACHE BOOL
"Compile with special options needed for gcov.")
# Our C code uses features from the C99 standard.
macro(use_c99)
if (CMAKE_VERSION VERSION_LESS "3.1")
if (CMAKE_C_COMPILER_ID STREQUAL "GNU")
set (CMAKE_C_FLAGS "--std=gnu99 ${CMAKE_C_FLAGS}")
endif ()
else ()
set (CMAKE_C_STANDARD 99)
endif ()
endmacro(use_c99)
# Our C++ code uses features from the C++11 standard.
macro(use_cxx11)
if (CMAKE_VERSION VERSION_LESS "3.1")
if (CMAKE_C_COMPILER_ID STREQUAL "GNU")
# Use --std=gnu++0x instead of --std=gnu++11 in order to support GCC 4.6.
set (CMAKE_CXX_FLAGS "--std=gnu++0x ${CMAKE_C_FLAGS}")
endif ()
else ()
set (CMAKE_CXX_STANDARD 11)
endif ()
endmacro(use_cxx11)
set (CMAKE_C_STANDARD 99)
set (CMAKE_CXX_STANDARD 11)
set (LIBUSBP_VERSION ${LIBUSBP_VERSION_MAJOR}.${LIBUSBP_VERSION_MINOR}.${LIBUSBP_VERSION_PATCH})


@@ -1,7 +1,5 @@
# libusbp: Pololu USB Library
Version: 1.2.0<br/>
Release date: 2020-11-16<br/>
[www.pololu.com](https://www.pololu.com/)
The **Pololu USB Library** (also known as **libusbp**) is a cross-platform C library for accessing USB devices.
@@ -17,7 +15,7 @@ The **Pololu USB Library** (also known as **libusbp**) is a cross-platform C lib
- Provides detailed error information to the caller.
- Each error includes one or more English sentences describing the error, including error codes from underlying APIs.
- Some errors have libusbp-defined error codes that can be used to programmatically decide how to handle the error.
- Provides an object-oriented C++ wrapper (using features of C++11).
- Provides an object-oriented C++ wrapper.
- Provides access to underlying identifiers, handles, and file descriptors.
@@ -139,9 +137,9 @@ If you are using GCC and a shell that supports Bash-like syntax, here is an exam
gcc program.c `pkg-config --cflags --libs libusbp-1`
Here is an equivalent command for C++. Note that we use the `--std=gnu++11` option because the libusbp C++ API requires features from C++11:
Here is an equivalent command for C++:
g++ --std=gnu++11 program.cpp `pkg-config --cflags --libs libusbp-1`
g++ program.cpp `pkg-config --cflags --libs libusbp-1`
The order of the arguments above matters: the user program must come before libusbp because it relies on symbols that are defined by libusbp.
@@ -167,6 +165,9 @@ For detailed documentation of this library, see the header files `libusb.h` and
## Version history
* 1.3.0 (2023-01-02):
* Windows: Added support for detecting FTDI serial ports. (FTDI devices with more than one port have not been tested and the interface for detecting them might change in the future.)
* macOS: Fixed the detection of serial ports for devices that are not CDC ACM.
* 1.2.0 (2020-11-16):
* Linux: Made the library work with devices attached to the cp210x driver.
* Windows: Made the library work with devices that have lowercase letters in their hardware IDs.


@@ -1,2 +1,3 @@
This was taken from https://github.com/pololu/libusbp on 2021-12-11.
This is version 1.3.0 taken from https://github.com/pololu/libusbp on
2023-05-06.


@@ -54,7 +54,7 @@ LIBUSBP_OBJS = $(patsubst %.c, $(OBJDIR)/%.o, $(LIBUSBP_SRCS))
$(LIBUSBP_OBJS): private CFLAGS += -Idep/libusbp/src -Idep/libusbp/include
LIBUSBP_LIB = $(OBJDIR)/libusbp.a
LIBUSBP_CFLAGS += -Idep/libusbp/include
LIBUSBP_LDFLAGS += $(LIBUSBP_LIB)
LIBUSBP_LDFLAGS +=
$(LIBUSBP_LIB): $(LIBUSBP_OBJS)
OBJS += $(LIBUSBP_OBJS)


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(async_in async_in.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(lsport lsport.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(lsusb lsusb.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(port_name port_name.cpp)
include_directories (


@@ -41,7 +41,6 @@ extern "C" {
#ifdef LIBUSBP_STATIC
# define LIBUSBP_API
#else
#error not static
# ifdef LIBUSBP_EXPORTS
# define LIBUSBP_API LIBUSBP_DLL_EXPORT
# else


@@ -107,7 +107,7 @@ namespace libusbp
{
public:
/*! Constructor that takes a pointer. */
explicit unique_pointer_wrapper(T * p = nullptr) noexcept
explicit unique_pointer_wrapper(T * p = NULL) noexcept
: pointer(p)
{
}
@@ -133,9 +133,9 @@ namespace libusbp
/*! Implicit conversion to bool. Returns true if the underlying pointer
* is not NULL. */
explicit operator bool() const noexcept
operator bool() const noexcept
{
return pointer != nullptr;
return pointer != NULL;
}
/*! Returns the underlying pointer. */
@@ -146,19 +146,19 @@ namespace libusbp
/*! Sets the underlying pointer to the specified value, freeing the
* previous pointer and taking ownership of the specified one. */
void pointer_reset(T * p = nullptr) noexcept
void pointer_reset(T * p = NULL) noexcept
{
pointer_free(pointer);
pointer = p;
}
/*! Releases the pointer, transferring ownership of it to the caller and
* resetting the underlying pointer of this object to nullptr. The caller
* resetting the underlying pointer of this object to NULL. The caller
* is responsible for freeing the returned pointer if it is not NULL. */
T * pointer_release() noexcept
{
T * p = pointer;
pointer = nullptr;
pointer = NULL;
return p;
}
@@ -193,14 +193,14 @@ namespace libusbp
{
public:
/*! Constructor that takes a pointer. */
explicit unique_pointer_wrapper_with_copy(T * p = nullptr) noexcept
explicit unique_pointer_wrapper_with_copy(T * p = NULL) noexcept
: unique_pointer_wrapper<T>(p)
{
}
/*! Move constructor. */
unique_pointer_wrapper_with_copy(
unique_pointer_wrapper_with_copy && other) noexcept = default;
unique_pointer_wrapper_with_copy && other) = default;
/*! Copy constructor */
unique_pointer_wrapper_with_copy(
@@ -228,13 +228,14 @@ namespace libusbp
{
public:
/*! Constructor that takes a pointer. */
explicit error(libusbp_error * p = nullptr) noexcept
explicit error(libusbp_error * p = NULL) noexcept
: unique_pointer_wrapper_with_copy(p)
{
}
/*! Wrapper for libusbp_error_get_message(). */
const char * what() const noexcept override {
virtual const char * what() const noexcept
{
return libusbp_error_get_message(pointer);
}
@@ -255,7 +256,7 @@ namespace libusbp
/*! \cond */
inline void throw_if_needed(libusbp_error * err)
{
if (err != nullptr)
if (err != NULL)
{
throw error(err);
}
@@ -267,7 +268,7 @@ namespace libusbp
{
public:
/*! Constructor that takes a pointer. */
explicit async_in_pipe(libusbp_async_in_pipe * pointer = nullptr)
explicit async_in_pipe(libusbp_async_in_pipe * pointer = NULL)
: unique_pointer_wrapper(pointer)
{
}
@@ -303,8 +304,8 @@ namespace libusbp
bool handle_finished_transfer(void * buffer, size_t * transferred,
error * transfer_error)
{
libusbp_error ** error_out = nullptr;
if (transfer_error != nullptr)
libusbp_error ** error_out = NULL;
if (transfer_error != NULL)
{
transfer_error->pointer_reset();
error_out = transfer_error->pointer_to_pointer_get();
@@ -328,7 +329,7 @@ namespace libusbp
{
public:
/*! Constructor that takes a pointer. */
explicit device(libusbp_device * pointer = nullptr) :
explicit device(libusbp_device * pointer = NULL) :
unique_pointer_wrapper_with_copy(pointer)
{
}
@@ -387,7 +388,7 @@ namespace libusbp
std::vector<device> vector;
for(size_t i = 0; i < size; i++)
{
vector.emplace_back(device_list[i]);
vector.push_back(device(device_list[i]));
}
libusbp_list_free(device_list);
return vector;
@@ -408,13 +409,13 @@ namespace libusbp
public:
/*! Constructor that takes a pointer. This object will free the pointer
* when it is destroyed. */
explicit generic_interface(libusbp_generic_interface * pointer = nullptr)
explicit generic_interface(libusbp_generic_interface * pointer = NULL)
: unique_pointer_wrapper_with_copy(pointer)
{
}
/*! Wrapper for libusbp_generic_interface_create. */
explicit generic_interface(const device & device,
generic_interface(const device & device,
uint8_t interface_number = 0, bool composite = false)
{
throw_if_needed(libusbp_generic_interface_create(
@@ -448,13 +449,13 @@ namespace libusbp
public:
/*! Constructor that takes a pointer. This object will free the pointer
* when it is destroyed. */
explicit generic_handle(libusbp_generic_handle * pointer = nullptr) noexcept
explicit generic_handle(libusbp_generic_handle * pointer = NULL) noexcept
: unique_pointer_wrapper(pointer)
{
}
/*! Wrapper for libusbp_generic_handle_open(). */
explicit generic_handle(const generic_interface & gi)
generic_handle(const generic_interface & gi)
{
throw_if_needed(libusbp_generic_handle_open(gi.pointer_get(), &pointer));
}
@@ -486,9 +487,9 @@ namespace libusbp
uint8_t bRequest,
uint16_t wValue,
uint16_t wIndex,
void * buffer = nullptr,
void * buffer = NULL,
uint16_t wLength = 0,
size_t * transferred = nullptr)
size_t * transferred = NULL)
{
throw_if_needed(libusbp_control_transfer(pointer,
bmRequestType, bRequest, wValue, wIndex,
@@ -542,13 +543,13 @@ namespace libusbp
public:
/*! Constructor that takes a pointer. This object will free the pointer
* when it is destroyed. */
explicit serial_port(libusbp_serial_port * pointer = nullptr)
explicit serial_port(libusbp_serial_port * pointer = NULL)
: unique_pointer_wrapper_with_copy(pointer)
{
}
/*! Wrapper for libusbp_serial_port_create(). */
explicit serial_port(const device & device,
serial_port(const device & device,
uint8_t interface_number = 0, bool composite = false)
{
throw_if_needed(libusbp_serial_port_create(


@@ -1,5 +1,3 @@
use_c99()
add_library (install_helper SHARED install_helper_windows.c dll.def)
target_link_libraries (install_helper setupapi msi)


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(test_async_in test_async_in.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(test_long_read test_long_read.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(test_long_write test_long_write.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_cxx11()
add_executable(test_transitions test_transitions.cpp)
include_directories (


@@ -1,5 +1,3 @@
use_c99()
# Settings for GCC
if (CMAKE_C_COMPILER_ID STREQUAL "GNU")
# By default, symbols are not visible outside of the library.


@@ -124,7 +124,7 @@ libusbp_error * error_add_v(libusbp_error * error, const char * format, va_list
int result = vsnprintf(x, 0, format, ap2);
if (result > 0)
{
outer_message_length = (size_t) result;
outer_message_length = result;
}
va_end(ap2);
}


@@ -37,7 +37,10 @@ libusbp_error * libusbp_find_device_with_vid_pid(
libusbp_device ** new_list = NULL;
size_t size = 0;
error = libusbp_list_connected_devices(&new_list, &size);
if (error == NULL)
{
error = libusbp_list_connected_devices(&new_list, &size);
}
assert(error != NULL || new_list != NULL);


@@ -37,6 +37,7 @@
#include <usbioctl.h>
#include <stringapiset.h>
#include <winusb.h>
#include <ntddmodm.h>
#endif
#ifdef __linux__


@@ -51,11 +51,14 @@ libusbp_error * async_in_transfer_create(
libusbp_error * error = NULL;
// Allocate memory for the transfer struct.
async_in_transfer * new_transfer = calloc(1, sizeof(async_in_transfer));
if (new_transfer == NULL)
async_in_transfer * new_transfer = NULL;
if (error == NULL)
{
error = &error_no_memory;
new_transfer = calloc(1, sizeof(async_in_transfer));
if (new_transfer == NULL)
{
error = &error_no_memory;
}
}
// Allocate memory for the buffer.


@@ -25,9 +25,14 @@ libusbp_error * create_device(io_service_t service, libusbp_device ** device)
assert(service != MACH_PORT_NULL);
assert(device != NULL);
libusbp_error * error = NULL;
// Allocate the device.
libusbp_device * new_device = NULL;
libusbp_error * error = device_allocate(&new_device);
if (error == NULL)
{
error = device_allocate(&new_device);
}
// Get the numeric IDs.
if (error == NULL)
@@ -84,7 +89,10 @@ libusbp_error * libusbp_device_copy(const libusbp_device * source, libusbp_devic
// Allocate the device.
libusbp_device * new_device = NULL;
error = device_allocate(&new_device);
if (error == NULL)
{
error = device_allocate(&new_device);
}
// Copy the simple fields, while leaving the pointers owned by the
// device NULL so that libusbp_device_free is still OK to call.


@@ -63,9 +63,8 @@ static libusbp_error * process_pipe_properties(libusbp_generic_handle * handle)
uint8_t transfer_type;
uint16_t max_packet_size;
uint8_t interval;
kr = (*handle->ioh)->GetPipeProperties(handle->ioh, (UInt8) i,
&direction, &endpoint_number, &transfer_type, &max_packet_size, &interval);
kern_return_t kr = (*handle->ioh)->GetPipeProperties(handle->ioh, i,
&direction, &endpoint_number, &transfer_type, &max_packet_size, &interval);
if (kr != KERN_SUCCESS)
{
return error_create_mach(kr, "Failed to get pipe properties for pipe %d.", i);
@@ -75,11 +74,11 @@ static libusbp_error * process_pipe_properties(libusbp_generic_handle * handle)
{
if (direction)
{
handle->in_pipe_index[endpoint_number] = (uint8_t) i;
handle->in_pipe_index[endpoint_number] = i;
}
else
{
handle->out_pipe_index[endpoint_number] = (uint8_t) i;
handle->out_pipe_index[endpoint_number] = i;
}
}
}
@@ -96,11 +95,14 @@ static libusbp_error * set_configuration(io_service_t service)
// Turn io_service_t into something we can actually use.
IOUSBDeviceInterface ** dev_handle = NULL;
IOCFPlugInInterface ** plug_in = NULL;
error = service_to_interface(service,
kIOUSBDeviceUserClientTypeID,
CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID197),
(void **)&dev_handle,
&plug_in);
if (error == NULL)
{
error = service_to_interface(service,
kIOUSBDeviceUserClientTypeID,
CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID197),
(void **)&dev_handle,
&plug_in);
}
uint8_t config_num = 0;
if (error == NULL)
@@ -170,7 +172,10 @@ static libusbp_error * set_configuration_and_get_service(
// Get an io_service_t for the physical device.
io_service_t device_service = MACH_PORT_NULL;
error = service_get_from_id(device_id, &device_service);
if (error == NULL)
{
error = service_get_from_id(device_id, &device_service);
}
    // Set the configuration to 1 if it is not set.
if (error == NULL)
@@ -209,11 +214,13 @@ libusbp_error * libusbp_generic_handle_open(
// Allocate memory for the handle.
libusbp_generic_handle * new_handle = NULL;
new_handle = calloc(1, sizeof(libusbp_generic_handle));
if (new_handle == NULL)
if (error == NULL)
{
error = &error_no_memory;
new_handle = calloc(1, sizeof(libusbp_generic_handle));
if (new_handle == NULL)
{
error = &error_no_memory;
}
}
// Get the io_service_t representing the IOUSBInterface.
@@ -323,11 +330,14 @@ libusbp_error * libusbp_generic_handle_set_timeout(
libusbp_error * error = NULL;
error = check_pipe_id(pipe_id);
if (error == NULL)
{
error = check_pipe_id(pipe_id);
}
if (error == NULL)
{
uint8_t endpoint_number = pipe_id & (uint8_t) MAX_ENDPOINT_NUMBER;
uint8_t endpoint_number = pipe_id & MAX_ENDPOINT_NUMBER;
if (pipe_id & 0x80)
{
@@ -401,7 +411,7 @@ libusbp_error * libusbp_read_pipe(
libusbp_error * error = NULL;
if (size == 0)
if (error == NULL && size == 0)
{
error = error_create("Transfer size 0 is not allowed.");
}
@@ -423,12 +433,12 @@ libusbp_error * libusbp_read_pipe(
if (error == NULL)
{
uint8_t endpoint_number = pipe_id & (uint8_t) MAX_ENDPOINT_NUMBER;
uint8_t endpoint_number = pipe_id & MAX_ENDPOINT_NUMBER;
uint32_t no_data_timeout = 0;
uint32_t completion_timeout = handle->in_timeout[endpoint_number];
uint32_t iokit_size = (uint32_t) size;
uint32_t iokit_size = size;
uint32_t pipe_index = handle->in_pipe_index[endpoint_number];
kern_return_t kr = (*handle->ioh)->ReadPipeTO(handle->ioh, (UInt8) pipe_index,
kern_return_t kr = (*handle->ioh)->ReadPipeTO(handle->ioh, pipe_index,
buffer, &iokit_size, no_data_timeout, completion_timeout);
if (transferred != NULL) { *transferred = iokit_size; }
if (kr != KERN_SUCCESS)
@@ -464,7 +474,7 @@ libusbp_error * libusbp_write_pipe(
libusbp_error * error = NULL;
if (size > UINT32_MAX)
if (error == NULL && size > UINT32_MAX)
{
error = error_create("Transfer size is too large.");
}
@@ -481,12 +491,12 @@ libusbp_error * libusbp_write_pipe(
if (error == NULL)
{
uint8_t endpoint_number = pipe_id & (uint8_t) MAX_ENDPOINT_NUMBER;
uint8_t endpoint_number = pipe_id & MAX_ENDPOINT_NUMBER;
uint32_t no_data_timeout = 0;
uint32_t completion_timeout = handle->out_timeout[endpoint_number];
uint32_t pipe_index = handle->out_pipe_index[endpoint_number];
kern_return_t kr = (*handle->ioh)->WritePipeTO(handle->ioh, (UInt8) pipe_index,
(void *)buffer, (UInt32) size, no_data_timeout, completion_timeout);
kern_return_t kr = (*handle->ioh)->WritePipeTO(handle->ioh, pipe_index,
(void *)buffer, size, no_data_timeout, completion_timeout);
if (kr != KERN_SUCCESS)
{
error = error_create_mach(kr, "");
@@ -588,7 +598,7 @@ IOUSBInterfaceInterface182 ** generic_handle_get_ioh(const libusbp_generic_handl
uint8_t generic_handle_get_pipe_index(const libusbp_generic_handle * handle, uint8_t pipe_id)
{
uint8_t endpoint_number = pipe_id & (uint8_t) MAX_ENDPOINT_NUMBER;
uint8_t endpoint_number = pipe_id & MAX_ENDPOINT_NUMBER;
if (pipe_id & 0x80)
{
return handle->in_pipe_index[endpoint_number];


@@ -82,7 +82,10 @@ libusbp_error * libusbp_generic_interface_create(
{
// Get an io_service_t for the physical device.
io_service_t device_service = MACH_PORT_NULL;
error = service_get_from_id(new_gi->device_id, &device_service);
if (error == NULL)
{
error = service_get_from_id(new_gi->device_id, &device_service);
}
// Get the io_service_t for the interface.
io_service_t interface_service = MACH_PORT_NULL;
@@ -146,7 +149,10 @@ libusbp_error * libusbp_generic_interface_copy(
// Allocate the generic interface.
libusbp_generic_interface * new_gi = NULL;
error = generic_interface_allocate(&new_gi);
if (error == NULL)
{
error = generic_interface_allocate(&new_gi);
}
// Copy the simple fields.
if (error == NULL)


@@ -42,12 +42,14 @@ libusbp_error * service_get_usb_interface(io_service_t service,
libusbp_error * error = NULL;
io_iterator_t iterator = MACH_PORT_NULL;
kern_return_t result = IORegistryEntryGetChildIterator(
service, kIOServicePlane, &iterator);
if (result != KERN_SUCCESS)
if (error == NULL)
{
error = error_create_mach(result, "Failed to get child iterator.");
kern_return_t result = IORegistryEntryGetChildIterator(
service, kIOServicePlane, &iterator);
if (result != KERN_SUCCESS)
{
error = error_create_mach(result, "Failed to get child iterator.");
}
}
// Loop through the devices to find the right one.
@@ -57,7 +59,7 @@ libusbp_error * service_get_usb_interface(io_service_t service,
if (candidate == MACH_PORT_NULL) { break; }
// Filter out candidates that are not of class IOUSBInterface.
bool conforms = (bool) IOObjectConformsTo(candidate, kIOUSBInterfaceClassName);
bool conforms = IOObjectConformsTo(candidate, kIOUSBInterfaceClassName);
if (!conforms)
{
IOObjectRelease(candidate);
@@ -88,54 +90,6 @@ libusbp_error * service_get_usb_interface(io_service_t service,
return error;
}
libusbp_error * service_get_child_by_class(io_service_t service,
const char * class_name, io_service_t * interface_service)
{
assert(service != MACH_PORT_NULL);
assert(interface_service != NULL);
*interface_service = MACH_PORT_NULL;
libusbp_error * error = NULL;
io_iterator_t iterator = MACH_PORT_NULL;
kern_return_t result = IORegistryEntryCreateIterator(
service, kIOServicePlane, kIORegistryIterateRecursively, &iterator);
if (result != KERN_SUCCESS)
{
error = error_create_mach(result, "Failed to get recursive iterator.");
}
// Loop through the devices to find the right one.
while (error == NULL)
{
io_service_t candidate = IOIteratorNext(iterator);
if (candidate == MACH_PORT_NULL) { break; }
// Filter out candidates that are not the right class.
bool conforms = (bool) IOObjectConformsTo(candidate, class_name);
if (!conforms)
{
IOObjectRelease(candidate);
continue;
}
// This is the right one. Pass it to the caller.
*interface_service = candidate;
break;
}
if (error == NULL && *interface_service == MACH_PORT_NULL)
{
error = error_create("Could not find entry with class %s.", class_name);
error = error_add_code(error, LIBUSBP_ERROR_NOT_READY);
}
if (iterator != MACH_PORT_NULL) { IOObjectRelease(iterator); }
return error;
}
libusbp_error * service_to_interface(
io_service_t service,
CFUUIDRef pluginType,
@@ -154,13 +108,15 @@ libusbp_error * service_to_interface(
// Create the plug-in interface.
IOCFPlugInInterface ** new_plug_in = NULL;
kern_return_t kr = IOCreatePlugInInterfaceForService(service,
pluginType, kIOCFPlugInInterfaceID,
&new_plug_in, &score);
if (kr != KERN_SUCCESS)
if (error == NULL)
{
error = error_create_mach(kr, "Failed to create plug-in interface.");
kern_return_t kr = IOCreatePlugInInterfaceForService(service,
pluginType, kIOCFPlugInInterfaceID,
&new_plug_in, &score);
if (kr != KERN_SUCCESS)
{
error = error_create_mach(kr, "Failed to create plug-in interface.");
}
}
// Create the device interface and pass it to the caller.
@@ -223,7 +179,7 @@ libusbp_error * get_string(io_registry_entry_t entry, CFStringRef name, char **
libusbp_error * error = NULL;
if (CFGetTypeID(cf_value) != CFStringGetTypeID())
if (error == NULL && CFGetTypeID(cf_value) != CFStringGetTypeID())
{
error = error_create("Property is not a string.");
}
@@ -244,7 +200,7 @@ libusbp_error * get_string(io_registry_entry_t entry, CFStringRef name, char **
error = string_copy(buffer, value);
}
CFRelease(cf_value);
if (cf_value != NULL) { CFRelease(cf_value); }
return error;
}
@@ -258,11 +214,14 @@ libusbp_error * get_int32(io_registry_entry_t entry, CFStringRef name, int32_t *
libusbp_error * error = NULL;
CFTypeRef cf_value = IORegistryEntryCreateCFProperty(entry, name, kCFAllocatorDefault, 0);
if (cf_value == NULL)
CFTypeRef cf_value = NULL;
if (error == NULL)
{
error = error_create("Failed to get int32 property from IORegistryEntry.");
cf_value = IORegistryEntryCreateCFProperty(entry, name, kCFAllocatorDefault, 0);
if (cf_value == NULL)
{
error = error_create("Failed to get int32 property from IORegistryEntry.");
}
}
if (error == NULL && CFGetTypeID(cf_value) != CFNumberGetTypeID())
@@ -292,13 +251,16 @@ libusbp_error * get_uint16(io_registry_entry_t entry, CFStringRef name, uint16_t
libusbp_error * error = NULL;
int32_t tmp;
error = get_int32(entry, name, &tmp);
if (error == NULL)
{
error = get_int32(entry, name, &tmp);
}
if (error == NULL)
{
// There is an unchecked conversion of an int32_t to a uint16_t here but
// There is an implicit conversion of an int32_t to a uint16_t here but
// we don't expect any data to be lost.
*value = (uint16_t) tmp;
*value = tmp;
}
return error;


@@ -37,10 +37,13 @@ libusbp_error * libusbp_list_connected_devices(
// Create a dictionary that says "IOProviderClass" => "IOUSBDevice"
// This dictionary is CFReleased by IOServiceGetMatchingServices.
CFMutableDictionaryRef dict = NULL;
dict = IOServiceMatching("IOUSBHostDevice");
if (dict == NULL)
if (error == NULL)
{
error = error_create("IOServiceMatching returned null.");
dict = IOServiceMatching("IOUSBHostDevice");
if (dict == NULL)
{
error = error_create("IOServiceMatching returned null.");
}
}
// Create an iterator for all the connected USB devices.


@@ -2,9 +2,6 @@
struct libusbp_serial_port
{
// The I/O Registry ID of the IOBSDSerialClient.
uint64_t id;
// A port filename like "/dev/cu.usbmodemFD123".
char * port_name;
};
@@ -29,23 +26,16 @@ libusbp_error * libusbp_serial_port_create(
return error_create("Device is null.");
}
// Add one to the interface number because that is what we need for the
// typical case: The user specifies the lower of the two interface numbers,
// which corresponds to the control interface of a CDC ACM device. We
// actually need the data interface because that is the one that the
// IOSerialBSDClient lives under. If this +1 causes any problems, it is
    // easy for the user to address it using an ifdef. Also, we might make
// this function more flexible in the future if we need to handle different
// types of serial devices with different drivers or interface layouts.
interface_number += 1;
libusbp_error * error = NULL;
libusbp_serial_port * new_port = calloc(1, sizeof(libusbp_serial_port));
if (new_port == NULL)
libusbp_serial_port * new_port = NULL;
if (error == NULL)
{
error = &error_no_memory;
new_port = calloc(1, sizeof(libusbp_serial_port));
if (new_port == NULL)
{
error = &error_no_memory;
}
}
// Get the ID for the physical device.
@@ -62,19 +52,67 @@ libusbp_error * libusbp_serial_port_create(
error = service_get_from_id(device_id, &device_service);
}
// Get an io_service_t for the interface.
io_service_t interface_service = MACH_PORT_NULL;
io_iterator_t iterator = MACH_PORT_NULL;
if (error == NULL)
{
error = service_get_usb_interface(device_service, interface_number, &interface_service);
kern_return_t result = IORegistryEntryCreateIterator(
device_service, kIOServicePlane, kIORegistryIterateRecursively, &iterator);
if (result != KERN_SUCCESS)
{
error = error_create_mach(result, "Failed to get recursive iterator.");
}
}
// Get an io_service_t for the IOSerialBSDClient
io_service_t serial_service = MACH_PORT_NULL;
if (error == NULL)
int32_t current_interface = -1;
int32_t last_acm_control_interface_with_no_port = -1;
int32_t last_acm_data_interface = -1;
while (error == NULL)
{
error = service_get_child_by_class(interface_service,
kIOSerialBSDServiceValue, &serial_service);
io_service_t service = IOIteratorNext(iterator);
if (service == MACH_PORT_NULL) { break; }
if (IOObjectConformsTo(service, kIOUSBHostInterfaceClassName))
{
error = get_int32(service, CFSTR("bInterfaceNumber"), &current_interface);
}
else if (IOObjectConformsTo(service, "AppleUSBACMControl"))
{
last_acm_control_interface_with_no_port = current_interface;
}
else if (IOObjectConformsTo(service, "AppleUSBACMData"))
{
last_acm_data_interface = current_interface;
}
else if (IOObjectConformsTo(service, kIOSerialBSDServiceValue))
{
int32_t fixed_interface = current_interface;
if (last_acm_data_interface == current_interface &&
last_acm_control_interface_with_no_port >= 0)
{
// We found an ACM control interface with no serial port, then
// an ACM data interface with a serial port. For consistency with
// other operating systems, we will consider this serial port to
// actually be associated with the control interface instead of the
// data interface.
fixed_interface = last_acm_control_interface_with_no_port;
}
last_acm_control_interface_with_no_port = -1;
if (fixed_interface == interface_number)
{
// We found the serial port the user is looking for.
serial_service = service;
break;
}
}
IOObjectRelease(service);
}
if (error == NULL && serial_service == MACH_PORT_NULL)
{
error = error_create("Could not find entry with class IOSerialBSDClient.");
error = error_add_code(error, LIBUSBP_ERROR_NOT_READY);
}
// Get the port name.
@@ -91,7 +129,7 @@ libusbp_error * libusbp_serial_port_create(
}
if (serial_service != MACH_PORT_NULL) { IOObjectRelease(serial_service); }
if (interface_service != MACH_PORT_NULL) { IOObjectRelease(interface_service); }
if (iterator != MACH_PORT_NULL) { IOObjectRelease(iterator); }
if (device_service != MACH_PORT_NULL) { IOObjectRelease(device_service); }
libusbp_serial_port_free(new_port);
@@ -125,11 +163,14 @@ libusbp_error * libusbp_serial_port_copy(const libusbp_serial_port * source,
libusbp_error * error = NULL;
// Allocate memory for the new object.
libusbp_serial_port * new_port = calloc(1, sizeof(libusbp_serial_port));
if (new_port == NULL)
libusbp_serial_port * new_port = NULL;
if (error == NULL)
{
error = &error_no_memory;
new_port = calloc(1, sizeof(libusbp_serial_port));
if (new_port == NULL)
{
error = &error_no_memory;
}
}
// Copy the port name.


@@ -113,42 +113,79 @@ static libusbp_error * get_interface_composite(
return error;
}
// Get a list of all the USB-related devices.
HDEVINFO new_list = SetupDiGetClassDevs(NULL, "USB", NULL,
DIGCF_ALLCLASSES | DIGCF_PRESENT);
if (new_list == INVALID_HANDLE_VALUE)
{
return error_create_winapi(
"Failed to get list of all USB devices while finding an interface.");
}
unsigned int list_index = 0;
HDEVINFO new_list = INVALID_HANDLE_VALUE;
DWORD i = 0;
// Iterate through the list until we find a device whose
// Iterate through various device lists until we find a device whose
// parent device is ours and which controls the interface
// specified by the caller.
for (DWORD i = 0; ; i++)
while (true)
{
if (new_list == INVALID_HANDLE_VALUE)
{
if (list_index == 0)
{
// Get a list of all the USB-related devices.
// It includes native USB interfaces and usbser.sys ports but
// not FTDI ports.
new_list = SetupDiGetClassDevs(NULL,
"USB", NULL, DIGCF_ALLCLASSES | DIGCF_PRESENT);
}
else if (list_index == 1)
{
// Get a list of all the COM port devices.
// This includes FTDI and usbser.sys ports.
new_list = SetupDiGetClassDevs(&GUID_DEVINTERFACE_COMPORT,
NULL, NULL, DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
}
else if (list_index == 2)
{
// Get a list of all modem devices.
// Rationale: https://github.com/pyserial/pyserial/commit/7bb1dcc5aea16ca1c957690cb5276df33af1c286
new_list = SetupDiGetClassDevs(&GUID_DEVINTERFACE_MODEM,
NULL, NULL, DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
}
else
{
// Could not find the child interface in any list.
// This could be a temporary condition.
libusbp_error * error = error_create("Could not find interface %d.",
interface_number);
error = error_add_code(error, LIBUSBP_ERROR_NOT_READY);
return error;
}
if (new_list == INVALID_HANDLE_VALUE)
{
return error_create_winapi(
"Failed to list devices to find an interface (%u).",
list_index);
}
i = 0;
}
SP_DEVINFO_DATA device_info_data;
device_info_data.cbSize = sizeof(SP_DEVINFO_DATA);
bool success = SetupDiEnumDeviceInfo(new_list, i, &device_info_data);
if (!success)
{
libusbp_error * error;
if (GetLastError() == ERROR_NO_MORE_ITEMS)
{
// Could not find the child interface. This could be
// a temporary condition.
error = error_create("Could not find interface %d.",
interface_number);
error = error_add_code(error, LIBUSBP_ERROR_NOT_READY);
// This list is done. Try the next list.
SetupDiDestroyDeviceInfoList(new_list);
new_list = INVALID_HANDLE_VALUE;
list_index++;
continue;
}
else
{
error = error_create_winapi(
"Failed to get device info while finding an interface.");
libusbp_error * error = error_create_winapi(
"Failed to get device info to find an interface.");
SetupDiDestroyDeviceInfoList(new_list);
return error;
}
SetupDiDestroyDeviceInfoList(new_list);
return error;
}
DEVINST parent_dev_inst;
@@ -162,6 +199,7 @@ static libusbp_error * get_interface_composite(
if (parent_dev_inst != dev_inst)
{
// This device is not a child of our device.
i++;
continue;
}
@@ -179,9 +217,21 @@ static libusbp_error * get_interface_composite(
unsigned int actual_interface_number;
int result = sscanf(device_id, "USB\\VID_%*4x&PID_%*4x&MI_%2x\\",
&actual_interface_number);
if (result != 1 || actual_interface_number != interface_number)
if (result != 1)
{
result = sscanf(device_id, "FTDIBUS\\%*[^\\]\\%x",
&actual_interface_number);
if (result != 1)
{
// Could not figure out the interface number.
i++;
continue;
}
}
if (actual_interface_number != interface_number)
{
// This is not the right interface.
i++;
continue;
}
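For reference, the two `sscanf` patterns above accept device instance IDs of the form `USB\VID_xxxx&PID_xxxx&MI_xx\...` (native composite interfaces) and `FTDIBUS\...` (FTDI ports). Below is a minimal, self-contained sketch of the same parsing; the helper name and the example IDs are made up for illustration and are not part of libusbp.

```c
#include <stdio.h>

// Extract a USB interface number from a Windows device instance ID,
// using the same two format strings as the diff above.
// Returns 1 on success and stores the number in *interface_number.
static int parse_interface_number(const char * device_id,
    unsigned int * interface_number)
{
    // Native composite interfaces, e.g. USB\VID_1FFB&PID_00B0&MI_01\...
    if (sscanf(device_id, "USB\\VID_%*4x&PID_%*4x&MI_%2x\\",
            interface_number) == 1)
    {
        return 1;
    }
    // FTDI ports, e.g. FTDIBUS\VID_0403+PID_6001+A12345\0000, where the
    // final hex field is treated as the interface number.
    if (sscanf(device_id, "FTDIBUS\\%*[^\\]\\%x", interface_number) == 1)
    {
        return 1;
    }
    return 0;
}

int main(void)
{
    unsigned int n;
    // Hypothetical IDs; real ones come from SetupDiGetDeviceInstanceId().
    if (parse_interface_number("USB\\VID_1FFB&PID_00B0&MI_01\\6&ABC&0&0001", &n))
    {
        printf("composite interface %u\n", n);   // prints 1
    }
    if (parse_interface_number("FTDIBUS\\VID_0403+PID_6001+A12345\\0000", &n))
    {
        printf("FTDI interface %u\n", n);        // prints 0
    }
    return 0;
}
```

The `%*` conversions discard the VID and PID fields, so only the interface number is assigned; that is why a return value of exactly 1 from `sscanf` indicates success.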


@@ -124,6 +124,26 @@ libusbp_error * libusbp_serial_port_create(
libusbp_string_free(usb_device_id);
libusbp_serial_port_free(new_sp);
// FTDI devices like the FT232RL aren't actually composite but they look
// like it on Windows because the serial port device is a child of the USB
// device. On Linux and macOS, those devices can be detected with
// composite=false (or composite=true and interface_number=0).
// This workaround allows that to work on Windows as well, and it might also
// make this API easier to use for some non-FTDI devices.
if (error && !composite)
{
libusbp_error * error2 = libusbp_serial_port_create(device, 0, true, port);
if (error2)
{
libusbp_error_free(error2);
}
else
{
libusbp_error_free(error);
return NULL;
}
}
return error;
}
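From a caller's point of view, the effect of this fallback is that requesting a non-composite serial port can still succeed when Windows actually exposes the device as a composite one with the port on interface 0. A minimal sketch of such a caller follows, assuming the usual libusbp C API; `libusbp_find_device_with_vid_pid` and `libusbp_serial_port_get_name` are assumed to come from the library's public header, and the include path and VID/PID values are placeholders.

```c
#include <stdbool.h>
#include <stdio.h>
#include <libusbp.h>   // assumed include path; adjust to your installation

int main(void)
{
    libusbp_device * device = NULL;
    libusbp_serial_port * port = NULL;
    char * port_name = NULL;
    libusbp_error * error = NULL;

    // Placeholder VID/PID; substitute your device's IDs.
    error = libusbp_find_device_with_vid_pid(0x1FFB, 0x00B0, &device);
    if (error == NULL && device == NULL)
    {
        fprintf(stderr, "Device not connected.\n");
        return 1;
    }

    // composite=false: with the fallback above, this should also work for
    // FTDI-style devices that Windows reports as composite.
    if (error == NULL)
    {
        error = libusbp_serial_port_create(device, 0, false, &port);
    }
    if (error == NULL)
    {
        error = libusbp_serial_port_get_name(port, &port_name);
    }

    if (error == NULL)
    {
        printf("Serial port: %s\n", port_name);
    }
    else
    {
        fprintf(stderr, "Error: %s\n", libusbp_error_get_message(error));
    }

    int exit_code = (error != NULL);
    libusbp_string_free(port_name);
    libusbp_serial_port_free(port);
    libusbp_device_free(device);
    libusbp_error_free(error);
    return exit_code;
}
```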


@@ -1,9 +1,8 @@
INCLUDE (CheckIncludeFileCXX)
# If catch.hpp is not present, we want to simply skip compiling the tests. This
# allows someone to compile and install libusbp without having catch installed.
# The header can either be installed in this directory or in a standard system
# location.
# If catch.hpp is not present, we want to simply skip compiling the tests.
# Download catch.hpp and put it in this directory:
# https://raw.githubusercontent.com/catchorg/Catch2/v2.x/single_include/catch2/catch.hpp
set (CMAKE_REQUIRED_INCLUDES "${CMAKE_CURRENT_SOURCE_DIR}")
CHECK_INCLUDE_FILE_CXX (catch.hpp HAVE_CATCH_FRAMEWORK)
if (NOT HAVE_CATCH_FRAMEWORK)
@@ -11,8 +10,6 @@ if (NOT HAVE_CATCH_FRAMEWORK)
return ()
endif ()
use_cxx11 ()
set(USE_TEST_DEVICE_A FALSE CACHE BOOL
"Run tests that require Test Device A.")


@@ -486,6 +486,9 @@ TEST_CASE("async_in_pipe for an interrupt endpoint")
// ms, a three-packet transfer will quickly receive those two packets
// and then keep waiting for more.
// Previous versions of macOS returned one packet instead of two,
// but this is no longer true on Darwin Kernel 22.1.0 (Oct 2022).
// Pause the ADC for 100 ms.
handle.control_transfer(0x40, 0xA0, 100, 0);
@@ -508,9 +511,6 @@ TEST_CASE("async_in_pipe for an interrupt endpoint")
#if defined(VBOX_LINUX_ON_WINDOWS)
CHECK(transferred == 0);
#elif defined(__APPLE__)
CHECK(transferred == transfer_size);
CHECK(buffer[4] == 0xAB);
#else
CHECK(transferred == transfer_size * 2);
CHECK(buffer[4] == 0xAB);
@@ -662,9 +662,10 @@ TEST_CASE("async_in_pipe for an interrupt endpoint")
expected_message = "Asynchronous IN transfer failed. "
"Incorrect function. Windows error code 0x1.";
#elif defined(__linux__)
// This request results in an error in Linux but it is only detected
// after some data is transferred.
expected_transferred = transfer_size + 1;
// This request results in an error in Linux after some data is transferred.
// On some older Linux systems, expected_transferred was transfer_size + 1.
// With Linux 5.15 on a Raspberry Pi 4, expected_transferred is transfer_size.
expected_transferred = transfer_size;
expected_message = "Asynchronous IN transfer failed. "
"The transfer overflowed. Error code 75.";
#elif defined(__APPLE__)


Binary file not shown.


@@ -93,6 +93,22 @@ You're now looking at the _top_ of the board.
row of header sockets allowing you to plug the board directly onto the floppy
disk drive; for simplicity I'm leaving that as an exercise for the reader.)
### If you want to use a PCB
Alternatively, you can make an actual PCB!
<div style="text-align: center">
<a href="pcb.png"><img src="pcb.png" style="width:80%" alt="the PCB schematic"></a>
</div>
This is a passive breakout board designed to take a PSoC5 development board, a
standard 34-way PC connector, and a 50-way 8" drive connector. It was
contributed by a user --- thanks!
<a href="FluxEngine_eagle_pcb.zip">Download this to get it</a>. This package
contains the layout in Eagle format, a printable PDF of the PCB layout, and
gerbers suitable for sending off for manufacture.
### Grounding
You _also_ need to solder a wire between a handy GND pin on the board and
@@ -185,10 +201,14 @@ generic libusb stuff and should build and run on Windows, Linux and OSX as
well, although on Windows it'll need MSYS2 and mingw32. You'll need to
install some support packages.
- For Linux (this is Ubuntu, but this should apply to Debian too):
- For Linux with Ubuntu/Debian:
`libusb-1.0-0-dev`, `libsqlite3-dev`, `zlib1g-dev`,
`libudev-dev`, `protobuf-compiler`, `libwxgtk3.0-gtk3-dev`,
`libfmt-dev`.
- For Linux with Fedora/Red Hat:
`git`, `make`, `gcc`, `gcc-c++`, `xxd`, `protobuf-compiler`,
`protobuf-devel`, `fmt-devel`, `systemd-devel`, `wxGTK3-devel`,
`libsqlite3x-devel`
- For OSX with Homebrew: `libusb`, `pkg-config`, `sqlite`,
`protobuf`, `truncate`, `wxwidgets`, `fmt`.
- For Windows with MSYS2: `make`, `mingw-w64-i686-libusb`,

doc/disk-40track_drive.md Normal file

@@ -0,0 +1,15 @@
40track_drive
====
## Adjust configuration for a 40-track drive
<!-- This file is automatically generated. Do not edit. -->
This is an extension profile; adding this to the command line will configure
FluxEngine to read from 40-track, 48tpi 5.25" drives. You have to specify this
explicitly because there is no way to detect it automatically.
For example:
```
fluxengine read ibm --180 40track_drive
```


@@ -1,23 +1,16 @@
Disk: Acorn ADFS
================
acornadfs
====
## BBC Micro, Archimedes
<!-- This file is automatically generated. Do not edit. -->
Acorn ADFS disks are pretty standard MFM encoded IBM scheme disks, although
with different sector sizes and with the 0-based sector identifiers rather
than 1-based sector identifiers. The index hole is ignored and sectors are
written whereever, requiring FluxEngine to do two revolutions to read a
disk.
Acorn ADFS disks are used by the 6502-based BBC Micro and ARM-based Archimedes
series of computers. They are yet another variation on MFM encoded IBM scheme
disks, although with different sector sizes and with 0-based rather than 1-based
sector identifiers. The index hole is ignored and sectors are written wherever
they happen to land, requiring FluxEngine to do two revolutions to read a disk.
There are various different kinds, which should all work out of the box.
Tested ones are:
- ADFS L: 80 track, 16 sector, 2 sides, 256 bytes per sector == 640kB.
- ADFE D/E: 80 track, 5 sector, 2 sides, 1024 bytes per sector == 800kB.
- ADFS F: 80 track, 10 sector, 2 sides, 1024 bytes per sector == 1600kB.
I expect the others to work, but haven't tried them; [get in
touch](https://github.com/davidgiven/fluxengine/issues/new) if you have any
news. For ADFS S (single-sided 40 track) you'll want `--heads 0 --cylinders
0-79x2`. For ADFS M (single-sided 80 track) you'll want `--heads 0`.
Be aware that Acorn logical block numbering goes all the way up side 0 and
then all the way up side 1. However, FluxEngine uses traditional disk images
@@ -25,15 +18,22 @@ with alternating sides, with the blocks from track 0 side 0 then track 0 side
1 then track 1 side 0 etc. Most Acorn emulators will use both formats, but
they might require nudging as the side order can't be reliably autodetected.
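To make the two orderings concrete, here is a rough sketch that re-orders a FluxEngine-style interleaved image into Acorn's side-sequential block order. It is written in C purely for illustration, and it assumes ADFS L geometry (80 cylinders, 2 sides, 16 sectors of 256 bytes per track); the file names and geometry constants are illustrative only.

```c
#include <stdio.h>
#include <string.h>

// Illustrative ADFS L geometry: 80 cylinders x 2 sides x 16 x 256-byte sectors.
#define CYLINDERS   80
#define SIDES       2
#define TRACK_BYTES (16 * 256)

int main(void)
{
    static unsigned char in[CYLINDERS * SIDES * TRACK_BYTES];
    static unsigned char out[sizeof(in)];

    FILE * f = fopen("interleaved.img", "rb");   // FluxEngine-style image
    if (!f || fread(in, 1, sizeof(in), f) != sizeof(in))
    {
        fprintf(stderr, "Could not read interleaved.img\n");
        return 1;
    }
    fclose(f);

    for (int cyl = 0; cyl < CYLINDERS; cyl++)
    {
        for (int side = 0; side < SIDES; side++)
        {
            // Interleaved: cyl 0 side 0, cyl 0 side 1, cyl 1 side 0, ...
            size_t src = (size_t)(cyl * SIDES + side) * TRACK_BYTES;
            // Side-sequential: all of side 0 first, then all of side 1.
            size_t dst = (size_t)(side * CYLINDERS + cyl) * TRACK_BYTES;
            memcpy(&out[dst], &in[src], TRACK_BYTES);
        }
    }

    f = fopen("sequential.img", "wb");           // Acorn-style image
    if (!f || fwrite(out, 1, sizeof(out), f) != sizeof(out))
    {
        fprintf(stderr, "Could not write sequential.img\n");
        return 1;
    }
    fclose(f);
    return 0;
}
```

Swapping the `src` and `dst` expressions converts in the opposite direction.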
Reading discs
-------------
## Options
Just do:
- Format variants:
- `160`: 160kB 3.5" or 5.25" 40-track SSDD; S format
- `320`: 320kB 3.5" or 5.25" 80-track SSDD; M format
- `640`: 640kB 3.5" or 5.25" 80-track DSDD; L format
- `800`: 800kB 3.5" 80-track DSDD; D and E formats
- `1600`: 1600kB 3.5" 80-track DSHD; F formats
```
fluxengine read acornadfs
```
## Examples
To read:
- `fluxengine read acornadfs --160 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --320 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --640 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --800 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --1600 -s drive:0 -o acornadfs.img`
You should end up with an `acornadfs.img` of the appropriate size for your disk
format. This is an alias for `fluxengine read ibm` with preconfigured
parameters.


@@ -1,34 +1,38 @@
Disk: Acorn DFS
===============
acorndfs
====
## Acorn Atom, BBC Micro series
<!-- This file is automatically generated. Do not edit. -->
Acorn DFS disks are pretty standard FM encoded IBM scheme disks, with
256-sectors and 0-based sector identifiers. There's nothing particularly
special here.
Acorn DFS disks are used by the Acorn Atom and BBC Micro series of computers.
They are pretty standard FM encoded IBM scheme disks, with 256-byte sectors and
0-based sector identifiers. There's nothing particularly special here.
DFS disks are all single-sided, but allow the other side of the disk to be
used as another drive. FluxEngine supports these; read one side at a time
with `--heads 0` or `--heads 1`.
used as another volume.
DFS comes in two varieties, 40 track and 80 track. These should both work. For
40 track you'll want `--cylinders 0-79x2`. Some rare disks are both at the same
time. FluxEngine can read these but it requires a bit of fiddling as they have
the same tracks on twice.
They come in two varieties, 40 track and 80 track. These should both work.
Some rare disks are both at the same time. FluxEngine can read these, but it
requires a bit of fiddling as the same tracks appear on the disk twice.
Reading discs
-------------
## Options
Just do:
- Format variants:
- `100`: 100kB 40-track SSSD
- `200`: 200kB 80-track SSSD
```
fluxengine read acorndfs
```
## Examples
You should end up with an `acorndfs.img` of the appropriate size for your disk
format. This is an alias for `fluxengine read ibm` with preconfigured
parameters.
To read:
References
----------
- `fluxengine read acorndfs --100 -s drive:0 -o acorndfs.img`
- `fluxengine read acorndfs --200 -s drive:0 -o acorndfs.img`
- [The Acord DFS disc format](https://beebwiki.mdfs.net/Acorn_DFS_disc_format)
To write:
- `fluxengine write acorndfs --100 -d drive:0 -i acorndfs.img`
- `fluxengine write acorndfs --200 -d drive:0 -i acorndfs.img`
## References
- [The Acorn DFS disc format](https://beebwiki.mdfs.net/Acorn_DFS_disc_format)


@@ -1,11 +1,13 @@
Disk: AES Lanier word processor
===============================
aeslanier
====
## 616kB 5.25" 77-track SSDD hard sectored
<!-- This file is automatically generated. Do not edit. -->
Back in 1980 Lanier released a series of very early integrated word processor
appliances, the No Problem. These were actually [rebranded AES Data Superplus
machines](http://vintagecomputers.site90.net/aes/). They were gigantic,
weighed 40kg, and one example I've found cost £13,000 in 1981 (the equivalent
of nearly £50,000 in 2018!).
weighed 40kg, and one example I've found cost £13,000 in 1981 (the equivalent
of nearly £50,000 in 2018!).
8080 machines with 32kB of RAM, they ran their own proprietary word
processing software off twin 5.25" drive units, but apparently other software
@@ -27,18 +29,20 @@ disk image, and I've had to make a lot of guesses as to the sector format
based on what looks right. If anyone knows _anything_ about these disks,
[please get in touch](https://github.com/davidgiven/fluxengine/issues/new).
Reading discs
-------------
## Options
Just do:
(no options)
```
fluxengine read aeslanier
```
## Examples
You'll end up with an `aeslanier.img` file.
To read:
Useful references
-----------------
- `fluxengine read aeslanier -s drive:0 -o aeslanier.img`
## References
* [SA800 Diskette Storage Drive - Theory Of
Operations](http://www.hartetechnologies.com/manuals/Shugart/50664-1_SA800_TheorOp_May78.pdf):
talks about MMFM a lot, but the Lanier machines didn't use this disk
format.
* [SA800 Diskette Storage Drive - Theory Of Operations](http://www.hartetechnologies.com/manuals/Shugart/50664-1_SA800_TheorOp_May78.pdf): talks about MMFM a lot, but the Lanier machines didn't use this disk format.


@@ -1,34 +1,36 @@
Disk: Agat
==========
agat
====
## 840kB 5.25" 80-track DS
<!-- This file is automatically generated. Do not edit. -->
The Agat (Russian: Агат) was a series Soviet-era computer, first released about
The Agat (Russian: Агат) was a series of Soviet-era computers, first released about
1983. These were based around a 6502 and were nominally Apple II-compatible
although with enough differences to be problematic.
They could use either standard Apple II 140kB disks, or a proprietary 840kB
MFM-based double-sided format. FluxEngine supports both of these; this profile
is for the proprietary format. for the Apple II format, use the [Apple II
profile](disk-apple2.md).
is for the proprietary format. For the Apple II format, use the `apple2`
profile.
## Options
(no options)
Reading discs
-------------
## Examples
Just do:
To read:
```
fluxengine read agat840
```
- `fluxengine read agat -s drive:0 -o agat.img`
You should end up with an `agat840.img` which is 860160 bytes long.
To write:
- `fluxengine write agat -d drive:0 -i agat.img`
Useful references
-----------------
## References
- [Magazine article on the
Agat](https://sudonull.com/post/54185-Is-AGAT-a-bad-copy-of-Apple)
Agat](https://sudonull.com/post/54185-Is-AGAT-a-bad-copy-of-Apple)
- [Forum thread with (some) documentation on the
format](https://torlus.com/floppy/forum/viewtopic.php?t=1385)
format](https://torlus.com/floppy/forum/viewtopic.php?t=1385)

Some files were not shown because too many files have changed in this diff.