Compare commits

...

107 Commits

Author SHA1 Message Date
David Given
1b859015ae Update documentation. 2023-08-02 13:42:23 +02:00
David Given
294ac87503 Update documentation for the Roland D20 format. 2023-07-31 23:36:45 +02:00
David Given
c297adb0c7 Try to fix Mac builds. 2023-07-31 22:30:52 +02:00
David Given
446b965794 Handle Roland extents properly if the directory entries are in the wrong order.
Deal with block numbers >39 (they go in the bottom of the disk).
2023-07-31 22:20:08 +02:00
David Given
1927cc7fe1 Fix issue where trying to rename files by clicking on the tree wasn't working. 2023-07-27 23:44:33 +02:00
David Given
4eca254daf Add support for renaming files. 2023-07-27 23:44:04 +02:00
David Given
c7d4fee3f6 Add support for deleting files. 2023-07-27 23:19:50 +02:00
David Given
a6f798ae5b Mangle and demangle filenames. Remember to write the correct extent numbers in
multiextent files.
2023-07-27 23:09:57 +02:00
David Given
c9ae836e52 Add very brittle write support. 2023-07-27 22:49:10 +02:00
David Given
e3ffa63f7f Make sure that the rotational speed is measured even if reads are done through
Browse Disk.
2023-07-27 22:14:48 +02:00
David Given
4ffc2cc1dc Add support for, hopefully, multi-extent files. 2023-07-27 00:30:44 +02:00
David Given
7f9ba14687 Correct erroneous index. 2023-07-26 22:37:56 +02:00
David Given
a24933e272 Merge from master. 2023-07-26 22:33:40 +02:00
David Given
20bdacbecf Add initial support for the Roland-D20 filesystem. 2023-07-26 22:31:20 +02:00
David Given
1f5903a9a0 Don't use std::filesystem; it makes life harder on Windows with its wide
strings.
2023-07-25 23:35:01 +02:00
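This commit, and the MinGW one just below it, come down to the same portability wrinkle: on Windows, std::filesystem::path stores wide strings, so code that treats paths as plain std::string behaves differently per platform. A minimal sketch of the issue (not FluxEngine code):

    #include <filesystem>
    #include <string>

    void example(const std::filesystem::path& p)
    {
        // std::string s = p;        // fine on Linux, but on Windows/MinGW the
                                     // implicit conversion yields std::wstring,
                                     // so this line does not compile there.

        std::string t = p.string();  // compiles everywhere, but on Windows it
                                     // performs a wide-to-narrow conversion
                                     // that can mangle non-ASCII characters.
        (void)t;
    }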
David Given
bb073b6bb3 Apparently Mingw can't automatically convert from path to string. 2023-07-25 23:23:04 +02:00
David Given
516241f8f5 Replace the image read file picker with a simple one. 2023-07-25 23:11:52 +02:00
David Given
977b6831a0 When reading Kryoflux streams, you can specify any file in the directory and it
will work (as the GUI now forces you to do this).
2023-07-25 22:48:17 +02:00
David Given
c61effb54f Add a file type box to the flux source selection page. 2023-07-25 22:27:09 +02:00
David Given
346d989944 When reading Kryoflux streams, allow the user to specify any file within the
directory and have it work (as that's what the GUI does now).
2023-07-25 22:51:34 +02:00
David Given
60a73c8d1e Add a file type box to the flux source selection page. 2023-07-25 22:27:09 +02:00
dg
e52db4a837 Typo fix. 2023-07-24 20:56:37 +00:00
dg
4e317643bc Try and install compatible versions of protobuf. 2023-07-24 20:53:51 +00:00
David Given
5f520bf375 Merge pull request #698 from davidgiven/zilogmcz
Add support for the ZDOS filesystem for the Zilog MCZ.
2023-07-24 22:16:33 +02:00
David Given
2efe521b3a Update documentation. 2023-07-24 21:48:37 +02:00
David Given
5c21103646 Get the ZDOS filesystem driver working. 2023-07-24 21:46:49 +02:00
David Given
9444696f37 Merge pull request #697 from davidgiven/ro
Allow read-only flux and image in the GUI.
2023-07-24 08:20:39 +02:00
David Given
082fe4e787 Hack in boilerplate for a ZDos filesystem. 2023-07-24 08:18:18 +02:00
David Given
5e13cf23f9 Allow read-only image reader/writers in the GUI. 2023-07-24 07:53:47 +02:00
David Given
8f98a1f557 Consolidate the image constructors in the same way that's been done for the
flux constructors.
2023-07-24 07:50:16 +02:00
David Given
5b21e8798b Allow read-only flux sources in the GUI. 2023-07-24 07:39:59 +02:00
David Given
b9ef5b7db8 Rename all the flux and image types to prefix the enums, due to them being in
the global namespace now.
2023-07-24 02:18:53 +02:00
David Given
9867f8c302 Combine enums for flux source/sink types. config.cc now knows whether they're
read-only, write-only, and read-write.
2023-07-24 00:50:54 +02:00
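The intent behind these two enum commits, sketched here with invented names (the real enumerators live in the FluxEngine sources): one prefixed enum covers both flux sources and sinks, and config.cc consults a capability table to learn which directions each type supports.

    #include <map>

    // Hypothetical illustration only.
    enum FluxType
    {
        FLUXTYPE_NOT_SET,
        FLUXTYPE_DRIVE,     // real hardware: read and write
        FLUXTYPE_KRYOFLUX,  // stream files: read-only
        FLUXTYPE_SCP,
    };

    enum Capability
    {
        CAP_READ = 1,
        CAP_WRITE = 2,
    };

    // The prefix matters because the enum now sits in the global namespace;
    // bare names like DRIVE would be far too easy to collide with.
    static const std::map<FluxType, int> fluxTypeCapabilities = {
        {FLUXTYPE_DRIVE,    CAP_READ | CAP_WRITE},
        {FLUXTYPE_KRYOFLUX, CAP_READ},
        {FLUXTYPE_SCP,      CAP_READ | CAP_WRITE},
    };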
David Given
315889faf6 Warning fix. 2023-07-23 22:49:23 +02:00
David Given
798e8fee89 Merge pull request #692 from davidgiven/protobuf
Rename the `requires` config field to `prerequisite`
2023-07-08 00:43:15 +02:00
dg
e1c49db329 Use brew --prefix to detect the installation path when copying licenses from
packages.
2023-07-07 22:10:52 +00:00
dg
dae9537472 Warning fixes. 2023-07-07 21:51:24 +00:00
dg
1330d56cdd Fix a bunch of errors caused by changes to libfmt. 2023-07-07 21:32:21 +00:00
David Given
6ce3ce20d0 Remove stray debugging code. 2023-07-07 01:03:31 +02:00
David Given
362c5ee9b0 Rename the requires config field to prerequisite, as requires is about to
become a C++ keyword.
2023-07-07 00:34:03 +02:00
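For context on the commit above: `requires` became a keyword in C++20, where it introduces constraints, so any identifier spelled that way (including a generated accessor for a config field named `requires`) stops being legal once the project builds as C++20. A one-line illustration of the new meaning:

    // C++20: `requires` introduces a requires-expression or requires-clause.
    template <typename T>
    concept HasChecksum = requires(T t) { t.checksum(); };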
David Given
0f34ce0278 Merge pull request #690 from Deledrius/nsi-fix
Fix incorrect product name in installer.
2023-06-26 14:27:39 +02:00
Joseph Davies
0c27c7c4c8 Fix incorrect product name in installer. 2023-06-25 16:18:03 -07:00
David Given
9db6efe7a2 Merge pull request #686 from davidgiven/docs
Update documentation.
2023-06-03 00:30:34 +02:00
David Given
8b8a22d7fb Add the PCB schematic. 2023-06-03 00:05:51 +02:00
David Given
0a70344bc1 Add Fedora package list. 2023-06-02 23:38:09 +02:00
David Given
e77d01911c Merge pull request #683 from davidgiven/gw
Reset the Greaseweazle data stream when connecting
2023-05-25 22:43:49 +02:00
David Given
d4c0853e1f Reset the Greaseweazle data stream when connecting. 2023-05-25 22:23:28 +02:00
David Given
363a4e959c Finally fix that format error when measuring disk speed. 2023-05-25 22:23:17 +02:00
David Given
9336a04681 Merge pull request #682 from davidgiven/docs
More documentation tweaking.
2023-05-25 22:10:10 +02:00
David Given
214ff38749 Tweak documentation layout. 2023-05-25 22:08:28 +02:00
David Given
a8f3c01d8b Add basic documentation for the extension formats. 2023-05-25 22:06:23 +02:00
David Given
4da6585ef9 Merge pull request #681 from davidgiven/bb679
Allow writing to Greaseweazle disks again by not setting hardSectorThresholdNs to inf.
2023-05-25 21:58:59 +02:00
David Given
df40100feb Merge pull request #680 from davidgiven/docs
Overhaul docs.
2023-05-25 21:40:32 +02:00
David Given
f2d92e93fb Format. 2023-05-25 21:27:49 +02:00
David Given
b4d8d569d2 Allow writing to Greaseweazle disks again by not setting hardSectorThresholdNs
to inf...
2023-05-25 21:26:44 +02:00
David Given
854b3e9c59 Better autogenerated documentation. 2023-05-25 21:14:41 +02:00
David Given
28ca2b72f1 Polishing. 2023-05-25 21:14:32 +02:00
David Given
7781c8179f Typo fix. 2023-05-25 20:20:02 +02:00
David Given
69ece3ffa0 Polish documentation. 2023-05-25 20:07:33 +02:00
David Given
53adcd92ed Spell (and capitalise) Greaseweazle correctly. 2023-05-25 19:50:05 +02:00
David Given
2bef6ca646 Merge pull request #678 from davidgiven/requirements
Overhaul config system and lots of other stuff
2023-05-16 01:29:58 +02:00
dg
bab350d771 Update Ubuntu build version. 2023-05-15 23:09:52 +00:00
dg
048dac223f Enable workflow cancelling when a new one is pushed. 2023-05-15 22:59:59 +00:00
dg
b7634da310 Work around Apple dev kit stupidity (defining BYTE_SIZE in a standard
header...)
2023-05-15 22:51:16 +00:00
dg
882c92c64a Merge. 2023-05-15 22:49:52 +00:00
dg
4592dbe28b Add drive types for the Micropolis drives. 2023-05-15 22:49:15 +00:00
dg
edc0f21ae7 Remove all the `requires` TPI constraints --- I'm not sure this is a good idea. 2023-05-15 22:48:33 +00:00
dg
8715b7f6c1 Don't crash if no format is selected. 2023-05-15 22:14:06 +00:00
dg
99511910dd If an incoming FL2 file has no TPI, use the default rather than 0 (the default
will probably be zero, but anyway).
2023-05-15 22:00:03 +00:00
dg
a03478b011 Don't store the actual DriveProto in FL2 files, because it makes the proto tags
significant.
2023-05-15 21:59:24 +00:00
dg
5c428e1f07 Don't require the user to specify the drive TPI if they don't want to. 2023-05-15 21:51:05 +00:00
dg
ee57615735 Deal with invalid options in the GUI. 2023-05-15 20:55:33 +00:00
dg
67300e5769 Add the ability to validate the configuration, at least in the CLI; this may
require some refactoring for the GUI to apply cleanly.
2023-05-14 23:18:48 +00:00
dg
873e05051c Massive rework of the config system to be clearer, more robust, and more
flexible. (But it doesn't check options any more.)
2023-05-14 22:04:51 +00:00
dg
4daaec46a7 Greying out of the option buttons now works; but the whole way configs are
handled is pretty unsatisfactory and needs work.
2023-05-13 23:29:34 +00:00
dg
dd8cc7bfd4 Attempt to move the configuration setup logic into Config, so it's centralised. 2023-05-13 12:42:31 +00:00
dg
5568ac382f Eliminate Environment --- we don't use it and Config contains this
functionality.
2023-05-13 00:04:42 +00:00
dg
dcdb3e4455 Encoders and decoders are routed through Config. 2023-05-12 23:58:44 +00:00
dg
17b29b1626 Flux sinks and image writers are routed through Config. 2023-05-12 23:47:09 +00:00
dg
dcfcc6271c Sort out a whole bunch of other things, including cleaning up the way the
verification source is handled.
2023-05-12 23:28:25 +00:00
dg
1d77ba6429 ImageReaders can now contribute config. 2023-05-12 22:20:13 +00:00
dg
ff5f019ac1 Fetching the image reader is now done through Config. 2023-05-12 21:52:53 +00:00
dg
e61eeb8c6f Fetching the flux source is now done through Config. 2023-05-12 21:25:54 +00:00
dg
68d22e7f54 Fix build error. 2023-05-11 23:31:38 +00:00
dg
790f0a42e3 Move setting the image writer into Config. 2023-05-11 23:06:24 +00:00
dg
08e9e508cc Move setting the image reader into Config. 2023-05-11 23:02:05 +00:00
dg
ad1a8d608f Migrate setting the flux sink to Config. 2023-05-11 22:54:32 +00:00
dg
d74ed71023 Move setting the flux source into Config. 2023-05-11 22:47:00 +00:00
dg
0c7f9e0888 Enforce option requirements --- but the config stuff is still kinda broken and
will need rethinking, especially if flux files can carry configs with them.
2023-05-11 21:58:10 +00:00
dg
ba5f6528a8 Move option handling into Config. 2023-05-11 20:37:54 +00:00
dg
65cf552ec2 Some cleanup. 2023-05-11 20:03:25 +00:00
dg
715c0a0c42 Move config file loading into config.cc. 2023-05-11 19:58:16 +00:00
dg
9e383575d1 Any drive settings in the global config will override loaded settings from an
fl2 file.
2023-05-11 19:21:59 +00:00
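A sketch of how that override order can work with protobuf messages (the names and the generated header are assumptions, not the actual FluxEngine code): merging the global config after the .fl2-derived one means explicitly-set user fields win.

    #include "config.pb.h"   // hypothetical generated header for ConfigProto

    // For singular fields, a later MergeFrom overwrites values that are set
    // in the source message, so the global/user settings take precedence.
    ConfigProto effectiveConfig(
        const ConfigProto& fromFl2File, const ConfigProto& globalOverrides)
    {
        ConfigProto effective;
        effective.MergeFrom(fromFl2File);
        effective.MergeFrom(globalOverrides);
        return effective;
    }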
dg
d84c366480 You can now fetch config fields by path. 2023-05-11 19:03:36 +00:00
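One plausible way to implement field-by-path lookup is a reflection walk over the protobuf descriptors. This sketch handles only singular fields and only reading, so it is much simpler than whatever the real code does:

    #include <google/protobuf/message.h>
    #include <google/protobuf/text_format.h>
    #include <sstream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Fetch a field such as "drive.tpi" from a protobuf message by walking
    // the dotted path one component at a time.
    std::string getConfigValue(
        const google::protobuf::Message& root, const std::string& path)
    {
        std::vector<std::string> parts;
        std::istringstream ss(path);
        for (std::string part; std::getline(ss, part, '.');)
            parts.push_back(part);

        const google::protobuf::Message* msg = &root;
        for (size_t i = 0; i < parts.size(); i++)
        {
            const auto* field = msg->GetDescriptor()->FindFieldByName(parts[i]);
            if (!field)
                throw std::runtime_error("no such config field: " + path);

            if (i == parts.size() - 1)
            {
                // Leaf: render the value as text (singular fields use index -1).
                std::string value;
                google::protobuf::TextFormat::PrintFieldValueToString(
                    *msg, field, -1, &value);
                return value;
            }

            // Descend into the submessage named by this path component.
            msg = &msg->GetReflection()->GetMessage(*msg, field);
        }
        return "";
    }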
dg
42e6c11081 Migrate to a new global config object. 2023-05-10 23:13:33 +00:00
dg
9ba3f90f1e Change the global config variable to a globalConfig() function. 2023-05-10 22:07:17 +00:00
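The usual reason for swapping a global variable for an accessor function is construction on first use; a minimal sketch, assuming a ConfigProto message type and a header name that are invented here:

    #include "config.pb.h"   // hypothetical generated header

    ConfigProto& globalConfig()
    {
        // Constructed on first use, which sidesteps static initialisation
        // order problems and gives tests one obvious place to reset state.
        static ConfigProto config;
        return config;
    }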
dg
24ff51274b Fix formatting. 2023-05-10 21:14:30 +00:00
dg
4c4c752827 Add missing file. 2023-05-10 21:11:10 +00:00
dg
5022b67e4a Drive information is stored in FL2 files. 2023-05-10 20:47:55 +00:00
dg
6b990a9f51 Overhaul the TPI stuff; now both the drive and the layout have a TPI setting,
which must be set.
2023-05-10 19:58:44 +00:00
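The arithmetic that makes both TPI values necessary, as a hedged illustration rather than the project's actual code: the drive TPI divided by the layout TPI gives the head step ratio.

    // e.g. a 96 tpi drive reading a 48 tpi format must double-step:
    // 96 / 48 == 2; equal TPIs step one-for-one.
    int headStepRatio(int driveTpi, int layoutTpi)
    {
        return driveTpi / layoutTpi;
    }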
dg
e69ce3b8df Merge. 2023-05-10 18:31:42 +00:00
dg
cf537b6222 Add the proto part of option requirements. 2023-05-10 18:29:46 +00:00
David Given
9d1160faff Merge pull request #677 from davidgiven/errors
Clean up error handling.
2023-05-10 01:13:49 +02:00
noreply@github.com
ed4067f194 Merge pull request #677 from davidgiven/errors
Clean up error handling.
2023-05-09 23:13:49 +00:00
dg
d4b55cd8f5 Switch from Logger() to log(). 2023-05-09 22:47:36 +00:00
dg
baaeb0bca7 Fix mangled formatting caused by clang-format. 2023-05-09 21:39:35 +00:00
dg
466c3c34e5 Replace the Error() object with an error() function which takes fmt
formatspecs, making for much cleaner code. Reformatted everything.

This actually happened in multiple steps but then I corrupted my local
repository and I had to recover from the working tree.
2023-05-09 20:59:44 +00:00
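A plausible shape for the new error() helper, following the standard libfmt forwarding pattern; the real signature and failure behaviour (it may throw rather than exit) are not shown on this page, so treat this as a sketch. It is why call sites in the diffs below read error("track data overrun by {} bits", cursor - bits.size()).

    #include "fmt/format.h"
    #include <cstdio>
    #include <cstdlib>

    template <typename... Args>
    [[noreturn]] void error(fmt::format_string<Args...> f, Args&&... args)
    {
        // Format with fmt, report, and fail; the Error() << ... streaming
        // object this replaces needed an explicit fmt::format() at each
        // call site.
        fmt::print(stderr, "error: {}\n",
            fmt::vformat(f, fmt::make_format_args(args...)));
        std::exit(1);
    }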
344 changed files with 10076 additions and 7060 deletions

View File

@@ -2,9 +2,13 @@ name: C/C++ CI
on: [push]
concurrency:
group: environment-${{ github.head_ref }}
cancel-in-progress: true
jobs:
build-linux:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -39,7 +43,7 @@ jobs:
uses: actions/upload-artifact@v2
with:
name: ${{ github.event.repository.name }}.${{ github.sha }}
path: fluxengine.FluxEngine.pkg
path: fluxengine/FluxEngine.pkg
build-windows:
runs-on: windows-latest
@@ -65,6 +69,9 @@ jobs:
mingw-w64-i686-nsis
zip
vim
- name: update-protobuf
run: |
pacman -U --noconfirm https://repo.msys2.org/mingw/mingw32/mingw-w64-i686-protobuf-21.9-1-any.pkg.tar.zst
- uses: actions/checkout@v2
with:
repository: 'davidgiven/fluxengine'

View File

@@ -1,5 +1,9 @@
name: Autorelease
concurrency:
group: environment-${{ github.head_ref }}
cancel-in-progress: true
on:
push:
branches:
@@ -32,6 +36,10 @@ jobs:
vim
- uses: actions/checkout@v3
- name: update-protobuf
run: |
pacman -U --noconfirm https://repo.msys2.org/mingw/mingw32/mingw-w64-i686-protobuf-21.9-1-any.pkg.tar.zst
- name: build
run: |
make -j2

Makefile (118 changed lines)
View File

@@ -1,4 +1,4 @@
# Special Windows settings.
#Special Windows settings.
ifeq ($(OS), Windows_NT)
MINGWBIN = /mingw32/bin
@@ -16,12 +16,12 @@ ifeq ($(OS), Windows_NT)
-Wno-deprecated-enum-float-conversion \
-Wno-deprecated-enum-enum-conversion
# Required to get the gcc run-time libraries on the path.
#Required to get the gcc run - time libraries on the path.
export PATH := $(PATH):$(MINGWBIN)
EXT ?= .exe
endif
# Special OSX settings.
#Special OSX settings.
ifeq ($(shell uname),Darwin)
PLATFORM = OSX
@@ -30,14 +30,14 @@ ifeq ($(shell uname),Darwin)
-framework Foundation
endif
# Check the Make version.
#Check the Make version.
ifeq ($(findstring 4.,$(MAKE_VERSION)),)
$(error You need GNU Make 4.x for this (if you're on OSX, use gmake).)
endif
# Normal settings.
#Normal settings.
OBJDIR ?= .obj
CCPREFIX ?=
@@ -65,6 +65,7 @@ CFLAGS += \
-I$(OBJDIR)/arch \
-I$(OBJDIR)/lib \
-I$(OBJDIR) \
-Wno-deprecated-declarations \
LDFLAGS += \
-lz \
@@ -142,7 +143,7 @@ $(PROTO_SRCS): | $(PROTO_HDRS)
$(PROTO_OBJS): CFLAGS += $(PROTO_CFLAGS)
PROTO_LIB = $(OBJDIR)/libproto.a
$(PROTO_LIB): $(PROTO_OBJS)
PROTO_LDFLAGS = $(shell $(PKG_CONFIG) --libs protobuf) -pthread $(PROTO_LIB)
PROTO_LDFLAGS = $(shell $(PKG_CONFIG) --libs protobuf) -pthread
.PRECIOUS: $(PROTO_HDRS) $(PROTO_SRCS)
include dep/agg/build.mk
@@ -164,57 +165,58 @@ include tests/build.mk
do-encodedecodetest = $(eval $(do-encodedecodetest-impl))
define do-encodedecodetest-impl
tests: $(OBJDIR)/$1$3.flux.encodedecode
$(OBJDIR)/$1$3.flux.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
tests: $(OBJDIR)/$1$$(subst $$(space),_,$3).flux.encodedecode
$(OBJDIR)/$1$$(subst $$(space),_,$3).flux.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
@mkdir -p $(dir $$@)
@echo ENCODEDECODETEST $1 flux $(FLUXENGINE_BIN) $2 $3
@scripts/encodedecodetest.sh $1 flux $(FLUXENGINE_BIN) $2 $3 > $$@
tests: $(OBJDIR)/$1$3.scp.encodedecode
$(OBJDIR)/$1$3.scp.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
tests: $(OBJDIR)/$1$$(subst $$(space),_,$3).scp.encodedecode
$(OBJDIR)/$1$$(subst $$(space),_,$3).scp.encodedecode: scripts/encodedecodetest.sh $(FLUXENGINE_BIN) $2
@mkdir -p $(dir $$@)
@echo ENCODEDECODETEST $1 scp $(FLUXENGINE_BIN) $2 $3
@scripts/encodedecodetest.sh $1 scp $(FLUXENGINE_BIN) $2 $3 > $$@
endef
$(call do-encodedecodetest,agat)
$(call do-encodedecodetest,amiga)
$(call do-encodedecodetest,apple2,,--140)
$(call do-encodedecodetest,atarist,,--360)
$(call do-encodedecodetest,atarist,,--370)
$(call do-encodedecodetest,atarist,,--400)
$(call do-encodedecodetest,atarist,,--410)
$(call do-encodedecodetest,atarist,,--720)
$(call do-encodedecodetest,atarist,,--740)
$(call do-encodedecodetest,atarist,,--800)
$(call do-encodedecodetest,atarist,,--820)
$(call do-encodedecodetest,agat,,--drive.tpi=96)
$(call do-encodedecodetest,amiga,,--drive.tpi=135)
$(call do-encodedecodetest,apple2,,--140 --drive.tpi=96)
$(call do-encodedecodetest,atarist,,--360 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--370 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--400 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--410 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--720 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--740 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--800 --drive.tpi=135)
$(call do-encodedecodetest,atarist,,--820 --drive.tpi=135)
$(call do-encodedecodetest,bk)
$(call do-encodedecodetest,brother,,--120)
$(call do-encodedecodetest,brother,,--240)
$(call do-encodedecodetest,commodore,scripts/commodore1541_test.textpb,--171)
$(call do-encodedecodetest,commodore,scripts/commodore1541_test.textpb,--192)
$(call do-encodedecodetest,commodore,,--800)
$(call do-encodedecodetest,commodore,,--1620)
$(call do-encodedecodetest,hplif,,--264)
$(call do-encodedecodetest,hplif,,--616)
$(call do-encodedecodetest,hplif,,--770)
$(call do-encodedecodetest,ibm,,--1200)
$(call do-encodedecodetest,ibm,,--1232)
$(call do-encodedecodetest,ibm,,--1440)
$(call do-encodedecodetest,ibm,,--1680)
$(call do-encodedecodetest,ibm,,--180)
$(call do-encodedecodetest,ibm,,--160)
$(call do-encodedecodetest,ibm,,--320)
$(call do-encodedecodetest,ibm,,--360)
$(call do-encodedecodetest,ibm,,--720)
$(call do-encodedecodetest,mac,scripts/mac400_test.textpb,--400)
$(call do-encodedecodetest,mac,scripts/mac800_test.textpb,--800)
$(call do-encodedecodetest,n88basic)
$(call do-encodedecodetest,rx50)
$(call do-encodedecodetest,tids990)
$(call do-encodedecodetest,victor9k,,--612)
$(call do-encodedecodetest,victor9k,,--1224)
$(call do-encodedecodetest,brother,,--120 --drive.tpi=135)
$(call do-encodedecodetest,brother,,--240 --drive.tpi=135)
$(call do-encodedecodetest,commodore,scripts/commodore1541_test.textpb,--171 --drive.tpi=96)
$(call do-encodedecodetest,commodore,scripts/commodore1541_test.textpb,--192 --drive.tpi=96)
$(call do-encodedecodetest,commodore,,--800 --drive.tpi=135)
$(call do-encodedecodetest,commodore,,--1620 --drive.tpi=135)
$(call do-encodedecodetest,hplif,,--264 --drive.tpi=135)
$(call do-encodedecodetest,hplif,,--616 --drive.tpi=135)
$(call do-encodedecodetest,hplif,,--770 --drive.tpi=135)
$(call do-encodedecodetest,ibm,,--1200 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--1232 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--1440 --drive.tpi=135)
$(call do-encodedecodetest,ibm,,--1680 --drive.tpi=135)
$(call do-encodedecodetest,ibm,,--180 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--160 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--320 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--360 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--720_96 --drive.tpi=96)
$(call do-encodedecodetest,ibm,,--720_135 --drive.tpi=135)
$(call do-encodedecodetest,mac,scripts/mac400_test.textpb,--400 --drive.tpi=135)
$(call do-encodedecodetest,mac,scripts/mac800_test.textpb,--800 --drive.tpi=135)
$(call do-encodedecodetest,n88basic,,--drive.tpi=96)
$(call do-encodedecodetest,rx50,,--drive.tpi=96)
$(call do-encodedecodetest,tids990,,--drive.tpi=48)
$(call do-encodedecodetest,victor9k,,--612 --drive.tpi=96)
$(call do-encodedecodetest,victor9k,,--1224 --drive.tpi=96)
do-corpustest = $(eval $(do-corpustest-impl))
define do-corpustest-impl
@@ -231,20 +233,20 @@ endef
ifneq ($(wildcard ../fluxengine-testdata/data),)
$(call do-corpustest,amiga.flux,amiga.adf,amiga)
$(call do-corpustest,atarist360.flux,atarist360.st,atarist --360)
$(call do-corpustest,atarist720.flux,atarist720.st,atarist --720)
$(call do-corpustest,brother120.flux,brother120.img,brother --120)
$(call do-corpustest,cmd-fd2000.flux,cmd-fd2000.img,commodore --1620)
$(call do-corpustest,ibm1232.flux,ibm1232.img,ibm --1232)
$(call do-corpustest,ibm1440.flux,ibm1440.img,ibm --1440)
$(call do-corpustest,mac800.flux,mac800.dsk,mac --800)
$(call do-corpustest,micropolis315.flux,micropolis315.img,micropolis --315)
$(call do-corpustest,amiga.flux,amiga.adf,amiga --drive.tpi=135)
$(call do-corpustest,atarist360.flux,atarist360.st,atarist --360 --drive.tpi=135)
$(call do-corpustest,atarist720.flux,atarist720.st,atarist --720 --drive.tpi=135)
$(call do-corpustest,brother120.flux,brother120.img,brother --120 --drive.tpi=135)
$(call do-corpustest,cmd-fd2000.flux,cmd-fd2000.img,commodore --1620 --drive.tpi=135)
$(call do-corpustest,ibm1232.flux,ibm1232.img,ibm --1232 --drive.tpi=96)
$(call do-corpustest,ibm1440.flux,ibm1440.img,ibm --1440 --drive.tpi=135)
$(call do-corpustest,mac800.flux,mac800.dsk,mac --800 --drive.tpi=135)
$(call do-corpustest,micropolis315.flux,micropolis315.img,micropolis --315 --drive.tpi=100)
$(call do-corpustest,northstar87-synthetic.flux,northstar87-synthetic.nsi,northstar --87 --drive.tpi=48)
$(call do-corpustest,northstar175-synthetic.flux,northstar175-synthetic.nsi,northstar --175 --drive.tpi=48)
$(call do-corpustest,northstar350-synthetic.flux,northstar350-synthetic.nsi,northstar --350 --drive.tpi=48)
$(call do-corpustest,victor9k_ss.flux,victor9k_ss.img,victor9k --612)
$(call do-corpustest,victor9k_ds.flux,victor9k_ds.img,victor9k --1224)
$(call do-corpustest,victor9k_ss.flux,victor9k_ss.img,victor9k --612 --drive.tpi=96)
$(call do-corpustest,victor9k_ds.flux,victor9k_ds.img,victor9k --1224 --drive.tpi=96)
endif
@@ -256,7 +258,7 @@ $(OBJDIR)/%.a:
%.exe:
@mkdir -p $(dir $@)
@echo LINK $@
@$(CXX) -o $@ $^ $(LDFLAGS) $(LDFLAGS)
@$(CXX) -o $@ $(filter %.o,$^) $(filter %.a,$^) $(LDFLAGS) $(filter %.a,$^) $(LDFLAGS)
$(OBJDIR)/%.o: %.cpp
@mkdir -p $(dir $@)

View File

@@ -35,11 +35,11 @@ Don't believe me? Watch the demo reel!
</div>
**New!** The FluxEngine client software now works with
[GreaseWeazle](https://github.com/keirf/Greaseweazle/wiki) hardware. So, if you
[Greaseweazle](https://github.com/keirf/Greaseweazle/wiki) hardware. So, if you
can't find a PSoC5 development kit, or don't want to use the Cypress Windows
tools for programming it, you can use one of these instead. Very nearly all
FluxEngine features are available with the GreaseWeazle and it works out-of-the
box. See the [dedicated GreaseWeazle documentation page](doc/greaseweazle.md)
FluxEngine features are available with the Greaseweazle and it works out-of-the
box. See the [dedicated Greaseweazle documentation page](doc/greaseweazle.md)
for more information.
Where?
@@ -65,7 +65,7 @@ following friendly articles:
- [Using a FluxEngine](doc/using.md) ∾ what to do with your new hardware ∾
flux files and image files ∾ knowing what you're doing
- [Using GreaseWeazle hardware with the FluxEngine client
- [Using Greaseweazle hardware with the FluxEngine client
software](doc/greaseweazle.md) ∾ what works ∾ what doesn't work ∾ where to
go for help
@@ -129,16 +129,16 @@ choices because they can store multiple types of file system.
| [`mac`](doc/disk-mac.md) | Macintosh: 400kB/800kB 3.5" GCR | 🦄 | 🦄 | MACHFS |
| [`micropolis`](doc/disk-micropolis.md) | Micropolis: 100tpi MetaFloppy disks | 🦄 | 🦄 | |
| [`mx`](doc/disk-mx.md) | DVK MX: Soviet-era PDP-11 clone | 🦖 | | |
| [`n88basic`](doc/disk-n88basic.md) | N88-BASIC: PC8800/PC98 5.25"/3.5" 77-track 26-sector DSHD | 🦄 | 🦄 | |
| [`n88basic`](doc/disk-n88basic.md) | N88-BASIC: PC8800/PC98 5.25" 77-track 26-sector DSHD | 🦄 | 🦄 | |
| [`northstar`](doc/disk-northstar.md) | Northstar: 5.25" hard sectored | 🦄 | 🦄 | |
| [`psos`](doc/disk-psos.md) | pSOS: 800kB DSDD with PHILE | 🦄 | 🦄 | PHILE |
| [`rolandd20`](doc/disk-rolandd20.md) | Roland D20: 3.5" electronic synthesiser disks | 🦖 | | |
| [`rolandd20`](doc/disk-rolandd20.md) | Roland D20: 3.5" electronic synthesiser disks | 🦄 | 🦖 | ROLAND |
| [`rx50`](doc/disk-rx50.md) | Digital RX50: 400kB 5.25" 80-track 10-sector SSDD | 🦖 | 🦖 | |
| [`smaky6`](doc/disk-smaky6.md) | Smaky 6: 308kB 5.25" 77-track 16-sector SSDD, hard sectored | 🦖 | | SMAKY6 |
| [`tids990`](doc/disk-tids990.md) | Texas Instruments DS990: 1126kB 8" DSSD | 🦖 | 🦖 | |
| [`tiki`](doc/disk-tiki.md) | Tiki 100: CP/M | | | CPMFS |
| [`victor9k`](doc/disk-victor9k.md) | Victor 9000 / Sirius One: 1224kB 5.25" DSDD GCR | 🦖 | 🦖 | |
| [`zilogmcz`](doc/disk-zilogmcz.md) | Zilog MCZ: 320kB 8" 77-track SSSD hard-sectored | 🦖 | | |
| [`zilogmcz`](doc/disk-zilogmcz.md) | Zilog MCZ: 320kB 8" 77-track SSSD hard-sectored | 🦖 | | ZDOS |
{: .datatable }
<!-- FORMATSEND -->
@@ -260,5 +260,3 @@ __Important:__ Because of all these exceptions, if you distribute the
FluxEngine package as a whole, you must comply with the terms of _all_ of the
licensing terms. This means that __effectively the FluxEngine package is
distributable under the terms of the GPL 2.0__.

View File

@@ -2,9 +2,10 @@
#define AESLANIER_H
#define AESLANIER_RECORD_SEPARATOR 0x55555122
#define AESLANIER_SECTOR_LENGTH 256
#define AESLANIER_RECORD_SIZE (AESLANIER_SECTOR_LENGTH + 5)
#define AESLANIER_SECTOR_LENGTH 256
#define AESLANIER_RECORD_SIZE (AESLANIER_SECTOR_LENGTH + 5)
extern std::unique_ptr<Decoder> createAesLanierDecoder(const DecoderProto& config);
extern std::unique_ptr<Decoder> createAesLanierDecoder(
const DecoderProto& config);
#endif

View File

@@ -11,56 +11,54 @@
static const FluxPattern SECTOR_PATTERN(32, AESLANIER_RECORD_SEPARATOR);
/* This is actually M2FM, rather than MFM, but it our MFM/FM decoder copes fine with it. */
/* This is actually M2FM, rather than MFM, but it our MFM/FM decoder copes fine
* with it. */
class AesLanierDecoder : public Decoder
{
public:
AesLanierDecoder(const DecoderProto& config):
Decoder(config)
{}
AesLanierDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(SECTOR_PATTERN);
}
{
return seekToPattern(SECTOR_PATTERN);
}
void decodeSectorRecord() override
{
/* Skip ID mark (we know it's a AESLANIER_RECORD_SEPARATOR). */
{
/* Skip ID mark (we know it's a AESLANIER_RECORD_SEPARATOR). */
readRawBits(16);
readRawBits(16);
const auto& rawbits = readRawBits(AESLANIER_RECORD_SIZE*16);
const auto& bytes = decodeFmMfm(rawbits).slice(0, AESLANIER_RECORD_SIZE);
const auto& reversed = bytes.reverseBits();
const auto& rawbits = readRawBits(AESLANIER_RECORD_SIZE * 16);
const auto& bytes =
decodeFmMfm(rawbits).slice(0, AESLANIER_RECORD_SIZE);
const auto& reversed = bytes.reverseBits();
_sector->logicalTrack = reversed[1];
_sector->logicalSide = 0;
_sector->logicalSector = reversed[2];
_sector->logicalTrack = reversed[1];
_sector->logicalSide = 0;
_sector->logicalSector = reversed[2];
/* Check header 'checksum' (which seems far too simple to mean much). */
/* Check header 'checksum' (which seems far too simple to mean much). */
{
uint8_t wanted = reversed[3];
uint8_t got = reversed[1] + reversed[2];
if (wanted != got)
return;
}
{
uint8_t wanted = reversed[3];
uint8_t got = reversed[1] + reversed[2];
if (wanted != got)
return;
}
/* Check data checksum, which also includes the header and is
* significantly better. */
/* Check data checksum, which also includes the header and is
* significantly better. */
_sector->data = reversed.slice(1, AESLANIER_SECTOR_LENGTH);
uint16_t wanted = reversed.reader().seek(0x101).read_le16();
uint16_t got = crc16ref(MODBUS_POLY_REF, _sector->data);
_sector->status = (wanted == got) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = reversed.slice(1, AESLANIER_SECTOR_LENGTH);
uint16_t wanted = reversed.reader().seek(0x101).read_le16();
uint16_t got = crc16ref(MODBUS_POLY_REF, _sector->data);
_sector->status = (wanted == got) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createAesLanierDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new AesLanierDecoder(config));
return std::unique_ptr<Decoder>(new AesLanierDecoder(config));
}

View File

@@ -8,15 +8,13 @@ uint8_t agatChecksum(const Bytes& bytes)
{
uint16_t checksum = 0;
for (uint8_t b : bytes)
{
if (checksum > 0xff)
checksum = (checksum + 1) & 0xff;
for (uint8_t b : bytes)
{
if (checksum > 0xff)
checksum = (checksum + 1) & 0xff;
checksum += b;
}
checksum += b;
}
return checksum & 0xff;
return checksum & 0xff;
}

View File

@@ -17,4 +17,3 @@ extern std::unique_ptr<Encoder> createAgatEncoder(const EncoderProto& config);
extern uint8_t agatChecksum(const Bytes& bytes);
#endif

View File

@@ -9,13 +9,14 @@
#include "fmt/format.h"
#include <string.h>
// clang-format off
/*
* data: X X X X X X X X X - - X - X - X - X X - X - X - = 0xff956a
* flux: 01 01 01 01 01 01 01 01 01 00 10 01 00 01 00 01 00 01 01 00 01 00 01 00 = 0x555549111444
*
* data: X X X X X X X X - X X - X - X - X - - X - X - X = 0xff6a95
* flux: 01 01 01 01 01 01 01 01 00 01 01 00 01 00 01 00 01 00 10 01 00 01 00 01 = 0x555514444911
*
*
* Each pattern is prefixed with this one:
*
* data: - - - X - - X - = 0x12
@@ -30,65 +31,59 @@
* 0100010010010010 = MFM encoded
* 1000100100100100 = with trailing zero
* - - - X - - X - = effective bitstream = 0x12
*
*/
// clang-format on
static const FluxPattern SECTOR_PATTERN(64, SECTOR_ID);
static const FluxPattern DATA_PATTERN(64, DATA_ID);
static const FluxMatchers ALL_PATTERNS = {
&SECTOR_PATTERN,
&DATA_PATTERN
};
static const FluxMatchers ALL_PATTERNS = {&SECTOR_PATTERN, &DATA_PATTERN};
class AgatDecoder : public Decoder
{
public:
AgatDecoder(const DecoderProto& config):
Decoder(config)
{}
AgatDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(ALL_PATTERNS);
}
{
return seekToPattern(ALL_PATTERNS);
}
void decodeSectorRecord() override
{
if (readRaw64() != SECTOR_ID)
return;
{
if (readRaw64() != SECTOR_ID)
return;
auto bytes = decodeFmMfm(readRawBits(64)).slice(0, 4);
if (bytes[3] != 0x5a)
return;
auto bytes = decodeFmMfm(readRawBits(64)).slice(0, 4);
if (bytes[3] != 0x5a)
return;
_sector->logicalTrack = bytes[1] >> 1;
_sector->logicalSector = bytes[2];
_sector->logicalSide = bytes[1] & 1;
_sector->status = Sector::DATA_MISSING; /* unintuitive but correct */
}
_sector->logicalTrack = bytes[1] >> 1;
_sector->logicalSector = bytes[2];
_sector->logicalSide = bytes[1] & 1;
_sector->status = Sector::DATA_MISSING; /* unintuitive but correct */
}
void decodeDataRecord() override
{
if (readRaw64() != DATA_ID)
return;
void decodeDataRecord() override
{
if (readRaw64() != DATA_ID)
return;
Bytes bytes = decodeFmMfm(readRawBits((AGAT_SECTOR_SIZE+2)*16)).slice(0, AGAT_SECTOR_SIZE+2);
Bytes bytes = decodeFmMfm(readRawBits((AGAT_SECTOR_SIZE + 2) * 16))
.slice(0, AGAT_SECTOR_SIZE + 2);
if (bytes[AGAT_SECTOR_SIZE+1] != 0x5a)
return;
if (bytes[AGAT_SECTOR_SIZE + 1] != 0x5a)
return;
_sector->data = bytes.slice(0, AGAT_SECTOR_SIZE);
uint8_t wantChecksum = bytes[AGAT_SECTOR_SIZE];
uint8_t gotChecksum = agatChecksum(_sector->data);
_sector->status = (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = bytes.slice(0, AGAT_SECTOR_SIZE);
uint8_t wantChecksum = bytes[AGAT_SECTOR_SIZE];
uint8_t gotChecksum = agatChecksum(_sector->data);
_sector->status =
(wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createAgatDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new AgatDecoder(config));
return std::unique_ptr<Decoder>(new AgatDecoder(config));
}

View File

@@ -95,7 +95,7 @@ public:
}
if (_cursor >= _bits.size())
Error() << "track data overrun";
error("track data overrun");
fillBitmapTo(_bits, _cursor, _bits.size(), {true, false});
auto fluxmap = std::make_unique<Fluxmap>();

View File

@@ -18,61 +18,61 @@ uint32_t amigaChecksum(const Bytes& bytes)
static uint8_t everyother(uint16_t x)
{
/* aabb ccdd eeff gghh */
x &= 0x6666; /* 0ab0 0cd0 0ef0 0gh0 */
x >>= 1; /* 00ab 00cd 00ef 00gh */
x |= x << 2; /* abab cdcd efef ghgh */
x &= 0x3c3c; /* 00ab cd00 00ef gh00 */
x >>= 2; /* 0000 abcd 0000 efgh */
x |= x >> 4; /* 0000 abcd abcd efgh */
return x;
/* aabb ccdd eeff gghh */
x &= 0x6666; /* 0ab0 0cd0 0ef0 0gh0 */
x >>= 1; /* 00ab 00cd 00ef 00gh */
x |= x << 2; /* abab cdcd efef ghgh */
x &= 0x3c3c; /* 00ab cd00 00ef gh00 */
x >>= 2; /* 0000 abcd 0000 efgh */
x |= x >> 4; /* 0000 abcd abcd efgh */
return x;
}
Bytes amigaInterleave(const Bytes& input)
{
Bytes output;
ByteWriter bw(output);
Bytes output;
ByteWriter bw(output);
/* Write all odd bits. (Numbering starts at 0...) */
/* Write all odd bits. (Numbering starts at 0...) */
{
ByteReader br(input);
while (!br.eof())
{
uint16_t x = br.read_be16();
x &= 0xaaaa; /* a0b0 c0d0 e0f0 g0h0 */
x |= x >> 1; /* aabb ccdd eeff gghh */
x = everyother(x); /* 0000 0000 abcd efgh */
bw.write_8(x);
}
}
{
ByteReader br(input);
while (!br.eof())
{
uint16_t x = br.read_be16();
x &= 0xaaaa; /* a0b0 c0d0 e0f0 g0h0 */
x |= x >> 1; /* aabb ccdd eeff gghh */
x = everyother(x); /* 0000 0000 abcd efgh */
bw.write_8(x);
}
}
/* Write all even bits. */
/* Write all even bits. */
{
ByteReader br(input);
while (!br.eof())
{
uint16_t x = br.read_be16();
x &= 0x5555; /* 0a0b 0c0d 0e0f 0g0h */
x |= x << 1; /* aabb ccdd eeff gghh */
x = everyother(x); /* 0000 0000 abcd efgh */
bw.write_8(x);
}
}
{
ByteReader br(input);
while (!br.eof())
{
uint16_t x = br.read_be16();
x &= 0x5555; /* 0a0b 0c0d 0e0f 0g0h */
x |= x << 1; /* aabb ccdd eeff gghh */
x = everyother(x); /* 0000 0000 abcd efgh */
bw.write_8(x);
}
}
return output;
return output;
}
Bytes amigaDeinterleave(const uint8_t*& input, size_t len)
{
assert(!(len & 1));
const uint8_t* odds = &input[0];
const uint8_t* evens = &input[len/2];
const uint8_t* evens = &input[len / 2];
Bytes output;
ByteWriter bw(output);
for (size_t i=0; i<len/2; i++)
for (size_t i = 0; i < len / 2; i++)
{
uint8_t o = *odds++;
uint8_t e = *evens++;
@@ -81,11 +81,15 @@ Bytes amigaDeinterleave(const uint8_t*& input, size_t len)
* http://graphics.stanford.edu/~seander/bithacks.html#InterleaveBMN
*/
uint16_t result =
(((e * 0x0101010101010101ULL & 0x8040201008040201ULL)
* 0x0102040810204081ULL >> 49) & 0x5555) |
(((o * 0x0101010101010101ULL & 0x8040201008040201ULL)
* 0x0102040810204081ULL >> 48) & 0xAAAA);
(((e * 0x0101010101010101ULL & 0x8040201008040201ULL) *
0x0102040810204081ULL >>
49) &
0x5555) |
(((o * 0x0101010101010101ULL & 0x8040201008040201ULL) *
0x0102040810204081ULL >>
48) &
0xAAAA);
bw.write_be16(result);
}
@@ -95,6 +99,6 @@ Bytes amigaDeinterleave(const uint8_t*& input, size_t len)
Bytes amigaDeinterleave(const Bytes& input)
{
const uint8_t* ptr = input.cbegin();
return amigaDeinterleave(ptr, input.size());
const uint8_t* ptr = input.cbegin();
return amigaDeinterleave(ptr, input.size());
}

View File

@@ -11,70 +11,74 @@
#include <string.h>
#include <algorithm>
/*
/*
* Amiga disks use MFM but it's not quite the same as IBM MFM. They only use
* a single type of record with a different marker byte.
*
*
* See the big comment in the IBM MFM decoder for the gruesome details of how
* MFM works.
*/
static const FluxPattern SECTOR_PATTERN(48, AMIGA_SECTOR_RECORD);
class AmigaDecoder : public Decoder
{
public:
AmigaDecoder(const DecoderProto& config):
Decoder(config),
_config(config.amiga())
{}
AmigaDecoder(const DecoderProto& config):
Decoder(config),
_config(config.amiga())
{
}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(SECTOR_PATTERN);
}
{
return seekToPattern(SECTOR_PATTERN);
}
void decodeSectorRecord() override
{
if (readRaw48() != AMIGA_SECTOR_RECORD)
return;
const auto& rawbits = readRawBits(AMIGA_RECORD_SIZE*16);
if (rawbits.size() < (AMIGA_RECORD_SIZE*16))
return;
const auto& rawbytes = toBytes(rawbits).slice(0, AMIGA_RECORD_SIZE*2);
const auto& bytes = decodeFmMfm(rawbits).slice(0, AMIGA_RECORD_SIZE);
{
if (readRaw48() != AMIGA_SECTOR_RECORD)
return;
const uint8_t* ptr = bytes.begin();
const auto& rawbits = readRawBits(AMIGA_RECORD_SIZE * 16);
if (rawbits.size() < (AMIGA_RECORD_SIZE * 16))
return;
const auto& rawbytes = toBytes(rawbits).slice(0, AMIGA_RECORD_SIZE * 2);
const auto& bytes = decodeFmMfm(rawbits).slice(0, AMIGA_RECORD_SIZE);
Bytes header = amigaDeinterleave(ptr, 4);
Bytes recoveryinfo = amigaDeinterleave(ptr, 16);
const uint8_t* ptr = bytes.begin();
_sector->logicalTrack = header[1] >> 1;
_sector->logicalSide = header[1] & 1;
_sector->logicalSector = header[2];
Bytes header = amigaDeinterleave(ptr, 4);
Bytes recoveryinfo = amigaDeinterleave(ptr, 16);
uint32_t wantedheaderchecksum = amigaDeinterleave(ptr, 4).reader().read_be32();
uint32_t gotheaderchecksum = amigaChecksum(rawbytes.slice(0, 40));
if (gotheaderchecksum != wantedheaderchecksum)
return;
_sector->logicalTrack = header[1] >> 1;
_sector->logicalSide = header[1] & 1;
_sector->logicalSector = header[2];
uint32_t wanteddatachecksum = amigaDeinterleave(ptr, 4).reader().read_be32();
uint32_t gotdatachecksum = amigaChecksum(rawbytes.slice(56, 1024));
uint32_t wantedheaderchecksum =
amigaDeinterleave(ptr, 4).reader().read_be32();
uint32_t gotheaderchecksum = amigaChecksum(rawbytes.slice(0, 40));
if (gotheaderchecksum != wantedheaderchecksum)
return;
Bytes data;
data.writer().append(amigaDeinterleave(ptr, 512)).append(recoveryinfo);
_sector->data = data;
_sector->status = (gotdatachecksum == wanteddatachecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
uint32_t wanteddatachecksum =
amigaDeinterleave(ptr, 4).reader().read_be32();
uint32_t gotdatachecksum = amigaChecksum(rawbytes.slice(56, 1024));
Bytes data;
data.writer().append(amigaDeinterleave(ptr, 512)).append(recoveryinfo);
_sector->data = data;
_sector->status = (gotdatachecksum == wanteddatachecksum)
? Sector::OK
: Sector::BAD_CHECKSUM;
}
private:
const AmigaDecoderProto& _config;
nanoseconds_t _clock;
const AmigaDecoderProto& _config;
nanoseconds_t _clock;
};
std::unique_ptr<Decoder> createAmigaDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new AmigaDecoder(config));
return std::unique_ptr<Decoder>(new AmigaDecoder(config));
}

View File

@@ -59,7 +59,7 @@ static void write_sector(std::vector<bool>& bits,
const std::shared_ptr<const Sector>& sector)
{
if ((sector->data.size() != 512) && (sector->data.size() != 528))
Error() << "unsupported sector size --- you must pick 512 or 528";
error("unsupported sector size --- you must pick 512 or 528");
uint32_t checksum = 0;
@@ -114,7 +114,8 @@ public:
const std::vector<std::shared_ptr<const Sector>>& sectors,
const Image& image) override
{
/* Number of bits for one nominal revolution of a real 200ms Amiga disk. */
/* Number of bits for one nominal revolution of a real 200ms Amiga disk.
*/
int bitsPerRevolution = 200e3 / _config.clock_rate_us();
std::vector<bool> bits(bitsPerRevolution);
unsigned cursor = 0;
@@ -129,13 +130,12 @@ public:
write_sector(bits, cursor, sector);
if (cursor >= bits.size())
Error() << "track data overrun";
error("track data overrun");
fillBitmapTo(bits, cursor, bits.size(), {true, false});
auto fluxmap = std::make_unique<Fluxmap>();
fluxmap->appendBits(bits,
calculatePhysicalClockPeriod(
_config.clock_rate_us() * 1e3, 200e6));
calculatePhysicalClockPeriod(_config.clock_rate_us() * 1e3, 200e6));
return fluxmap;
}

View File

@@ -5,16 +5,15 @@
#include "decoders/decoders.h"
#include "encoders/encoders.h"
#define APPLE2_SECTOR_RECORD 0xd5aa96
#define APPLE2_DATA_RECORD 0xd5aaad
#define APPLE2_SECTOR_RECORD 0xd5aa96
#define APPLE2_DATA_RECORD 0xd5aaad
#define APPLE2_SECTOR_LENGTH 256
#define APPLE2_SECTOR_LENGTH 256
#define APPLE2_ENCODED_SECTOR_LENGTH 342
#define APPLE2_SECTORS 16
#define APPLE2_SECTORS 16
extern std::unique_ptr<Decoder> createApple2Decoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createApple2Encoder(const EncoderProto& config);
#endif

View File

@@ -50,8 +50,7 @@ public:
writeSector(bits, cursor, *sector);
if (cursor >= bits.size())
Error() << fmt::format(
"track data overrun by {} bits", cursor - bits.size());
error("track data overrun by {} bits", cursor - bits.size());
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -118,8 +117,7 @@ private:
// There is data to encode to disk.
if ((sector.data.size() != APPLE2_SECTOR_LENGTH))
Error() << fmt::format(
"unsupported sector size {} --- you must pick 256",
error("unsupported sector size {} --- you must pick 256",
sector.data.size());
// Write address syncing leader : A sequence of "FF40"s; 5 of them

View File

@@ -3,17 +3,19 @@
/* Brother word processor format (or at least, one of them) */
#define BROTHER_SECTOR_RECORD 0xFFFFFD57
#define BROTHER_DATA_RECORD 0xFFFFFDDB
#define BROTHER_DATA_RECORD_PAYLOAD 256
#define BROTHER_DATA_RECORD_CHECKSUM 3
#define BROTHER_SECTOR_RECORD 0xFFFFFD57
#define BROTHER_DATA_RECORD 0xFFFFFDDB
#define BROTHER_DATA_RECORD_PAYLOAD 256
#define BROTHER_DATA_RECORD_CHECKSUM 3
#define BROTHER_DATA_RECORD_ENCODED_SIZE 415
#define BROTHER_TRACKS_PER_240KB_DISK 78
#define BROTHER_TRACKS_PER_120KB_DISK 39
#define BROTHER_SECTORS_PER_TRACK 12
#define BROTHER_TRACKS_PER_240KB_DISK 78
#define BROTHER_TRACKS_PER_120KB_DISK 39
#define BROTHER_SECTORS_PER_TRACK 12
extern std::unique_ptr<Decoder> createBrotherDecoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createBrotherEncoder(const EncoderProto& config);
extern std::unique_ptr<Decoder> createBrotherDecoder(
const DecoderProto& config);
extern std::unique_ptr<Encoder> createBrotherEncoder(
const EncoderProto& config);
#endif

View File

@@ -1,13 +1,13 @@
GCR_ENTRY(0x55, 0) // 00000
GCR_ENTRY(0x57, 1) // 00001
GCR_ENTRY(0x5b, 2) // 00010
GCR_ENTRY(0x5d, 3) // 00011
GCR_ENTRY(0x5f, 4) // 00100
GCR_ENTRY(0x6b, 5) // 00101
GCR_ENTRY(0x6d, 6) // 00110
GCR_ENTRY(0x6f, 7) // 00111
GCR_ENTRY(0x75, 8) // 01000
GCR_ENTRY(0x77, 9) // 01001
GCR_ENTRY(0x55, 0) // 00000
GCR_ENTRY(0x57, 1) // 00001
GCR_ENTRY(0x5b, 2) // 00010
GCR_ENTRY(0x5d, 3) // 00011
GCR_ENTRY(0x5f, 4) // 00100
GCR_ENTRY(0x6b, 5) // 00101
GCR_ENTRY(0x6d, 6) // 00110
GCR_ENTRY(0x6f, 7) // 00111
GCR_ENTRY(0x75, 8) // 01000
GCR_ENTRY(0x77, 9) // 01001
GCR_ENTRY(0x7b, 10) // 01010
GCR_ENTRY(0x7d, 11) // 01011
GCR_ENTRY(0x7f, 12) // 01100
@@ -30,4 +30,3 @@ GCR_ENTRY(0xef, 28) // 11100
GCR_ENTRY(0xf5, 29) // 11101
GCR_ENTRY(0xf7, 30) // 11110
GCR_ENTRY(0xfb, 31) // 11111

View File

@@ -11,7 +11,8 @@
const FluxPattern SECTOR_RECORD_PATTERN(32, BROTHER_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(32, BROTHER_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN({ &SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN });
const FluxMatchers ANY_RECORD_PATTERN(
{&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static std::vector<uint8_t> outputbuffer;
@@ -32,88 +33,89 @@ static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
case gcr: return data;
#include "data_gcr.h"
#undef GCR_ENTRY
#define GCR_ENTRY(gcr, data) \
case gcr: \
return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
static int decode_header_gcr(uint16_t word)
{
switch (word)
{
#define GCR_ENTRY(gcr, data) \
case gcr: return data;
#include "header_gcr.h"
#undef GCR_ENTRY
}
return -1;
switch (word)
{
#define GCR_ENTRY(gcr, data) \
case gcr: \
return data;
#include "header_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
class BrotherDecoder : public Decoder
{
public:
BrotherDecoder(const DecoderProto& config):
Decoder(config)
{}
BrotherDecoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(ANY_RECORD_PATTERN);
}
{
return seekToPattern(ANY_RECORD_PATTERN);
}
void decodeSectorRecord() override
{
if (readRaw32() != BROTHER_SECTOR_RECORD)
return;
{
if (readRaw32() != BROTHER_SECTOR_RECORD)
return;
const auto& rawbits = readRawBits(32);
const auto& bytes = toBytes(rawbits).slice(0, 4);
const auto& rawbits = readRawBits(32);
const auto& bytes = toBytes(rawbits).slice(0, 4);
ByteReader br(bytes);
_sector->logicalTrack = decode_header_gcr(br.read_be16());
_sector->logicalSector = decode_header_gcr(br.read_be16());
ByteReader br(bytes);
_sector->logicalTrack = decode_header_gcr(br.read_be16());
_sector->logicalSector = decode_header_gcr(br.read_be16());
/* Sanity check the values read; there's no header checksum and
* occasionally we get garbage due to bit errors. */
if (_sector->logicalSector > 11)
return;
if (_sector->logicalTrack > 79)
return;
/* Sanity check the values read; there's no header checksum and
* occasionally we get garbage due to bit errors. */
if (_sector->logicalSector > 11)
return;
if (_sector->logicalTrack > 79)
return;
_sector->status = Sector::DATA_MISSING;
}
_sector->status = Sector::DATA_MISSING;
}
void decodeDataRecord() override
{
if (readRaw32() != BROTHER_DATA_RECORD)
return;
{
if (readRaw32() != BROTHER_DATA_RECORD)
return;
const auto& rawbits = readRawBits(BROTHER_DATA_RECORD_ENCODED_SIZE*8);
const auto& rawbytes = toBytes(rawbits).slice(0, BROTHER_DATA_RECORD_ENCODED_SIZE);
const auto& rawbits = readRawBits(BROTHER_DATA_RECORD_ENCODED_SIZE * 8);
const auto& rawbytes =
toBytes(rawbits).slice(0, BROTHER_DATA_RECORD_ENCODED_SIZE);
Bytes bytes;
ByteWriter bw(bytes);
BitWriter bitw(bw);
for (uint8_t b : rawbytes)
{
uint32_t nibble = decode_data_gcr(b);
bitw.push(nibble, 5);
}
bitw.flush();
Bytes bytes;
ByteWriter bw(bytes);
BitWriter bitw(bw);
for (uint8_t b : rawbytes)
{
uint32_t nibble = decode_data_gcr(b);
bitw.push(nibble, 5);
}
bitw.flush();
_sector->data = bytes.slice(0, BROTHER_DATA_RECORD_PAYLOAD);
uint32_t realCrc = crcbrother(_sector->data);
uint32_t wantCrc = bytes.reader().seek(BROTHER_DATA_RECORD_PAYLOAD).read_be24();
_sector->status = (realCrc == wantCrc) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = bytes.slice(0, BROTHER_DATA_RECORD_PAYLOAD);
uint32_t realCrc = crcbrother(_sector->data);
uint32_t wantCrc =
bytes.reader().seek(BROTHER_DATA_RECORD_PAYLOAD).read_be24();
_sector->status =
(realCrc == wantCrc) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createBrotherDecoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new BrotherDecoder(config));
return std::unique_ptr<Decoder>(new BrotherDecoder(config));
}

View File

@@ -67,7 +67,7 @@ static void write_sector_data(
int width = 0;
if (data.size() != BROTHER_DATA_RECORD_PAYLOAD)
Error() << "unsupported sector size";
error("unsupported sector size");
auto write_byte = [&](uint8_t byte)
{
@@ -107,8 +107,7 @@ public:
}
public:
std::unique_ptr<Fluxmap> encode(
std::shared_ptr<const TrackInfo>& trackInfo,
std::unique_ptr<Fluxmap> encode(std::shared_ptr<const TrackInfo>& trackInfo,
const std::vector<std::shared_ptr<const Sector>>& sectors,
const Image& image) override
{
@@ -116,8 +115,8 @@ public:
std::vector<bool> bits(bitsPerRevolution);
unsigned cursor = 0;
int sectorCount = 0;
for (const auto& sectorData : sectors)
int sectorCount = 0;
for (const auto& sectorData : sectors)
{
double headerMs = _config.post_index_gap_ms() +
sectorCount * _config.sector_spacing_ms();
@@ -126,16 +125,18 @@ public:
unsigned dataCursor = dataMs * 1e3 / _config.clock_rate_us();
fillBitmapTo(bits, cursor, headerCursor, {true, false});
write_sector_header(
bits, cursor, sectorData->logicalTrack, sectorData->logicalSector);
write_sector_header(bits,
cursor,
sectorData->logicalTrack,
sectorData->logicalSector);
fillBitmapTo(bits, cursor, dataCursor, {true, false});
write_sector_data(bits, cursor, sectorData->data);
sectorCount++;
sectorCount++;
}
if (cursor >= bits.size())
Error() << "track data overrun";
error("track data overrun");
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -147,8 +148,7 @@ private:
const BrotherEncoderProto& _config;
};
std::unique_ptr<Encoder> createBrotherEncoder(
const EncoderProto& config)
std::unique_ptr<Encoder> createBrotherEncoder(const EncoderProto& config)
{
return std::unique_ptr<Encoder>(new BrotherEncoder(config));
}

View File

@@ -76,4 +76,3 @@ GCR_ENTRY(0x6BAB, 74)
GCR_ENTRY(0xAD5F, 75)
GCR_ENTRY(0xDBED, 76)
GCR_ENTRY(0x55BB, 77)

View File

@@ -37,8 +37,7 @@ OBJS += $(LIBARCH_OBJS)
$(LIBARCH_SRCS): | $(PROTO_HDRS)
$(LIBARCH_SRCS): CFLAGS += $(PROTO_CFLAGS)
LIBARCH_LIB = $(OBJDIR)/libarch.a
LIBARCH_LDFLAGS =
$(LIBARCH_LIB): $(LIBARCH_OBJS)
LIBARCH_LDFLAGS = $(LIBARCH_LIB)
$(call use-pkgconfig, $(LIBARCH_LIB), $(LIBARCH_OBJS), fmt)

View File

@@ -2,27 +2,27 @@
#include "c64.h"
/*
* Track Sectors/track # Sectors Storage in Bytes Clock rate
* ----- ------------- --------- ---------------- ----------
* 1-17 21 357 7820 3.25
* 18-24 19 133 7170 3.5
* 25-30 18 108 6300 3.75
* 31-40(*) 17 85 6020 4
* ---
* 683 (for a 35 track image)
*
* The clock rate is normalised for a 200ms drive.
*/
* Track Sectors/track # Sectors Storage in Bytes Clock rate
* ----- ------------- --------- ---------------- ----------
* 1-17 21 357 7820 3.25
* 18-24 19 133 7170 3.5
* 25-30 18 108 6300 3.75
* 31-40(*) 17 85 6020 4
* ---
* 683 (for a 35 track image)
*
* The clock rate is normalised for a 200ms drive.
*/
nanoseconds_t clockPeriodForC64Track(unsigned track)
{
constexpr double BYTE_SIZE = 8.0;
constexpr double b = 8.0;
if (track < 17)
return 26.0 / BYTE_SIZE;
return 26.0 / b;
if (track < 24)
return 28.0 / BYTE_SIZE;
return 28.0 / b;
if (track < 30)
return 30.0 / BYTE_SIZE;
return 32.0 / BYTE_SIZE;
return 30.0 / b;
return 32.0 / b;
}

View File

@@ -4,11 +4,11 @@
#include "decoders/decoders.h"
#include "encoders/encoders.h"
#define C64_SECTOR_RECORD 0xffd49
#define C64_DATA_RECORD 0xffd57
#define C64_SECTOR_LENGTH 256
#define C64_SECTOR_RECORD 0xffd49
#define C64_DATA_RECORD 0xffd57
#define C64_SECTOR_LENGTH 256
/* Source: http://www.unusedino.de/ec64/technical/formats/g64.html
/* Source: http://www.unusedino.de/ec64/technical/formats/g64.html
1. Header sync FF FF FF FF FF (40 'on' bits, not GCR)
2. Header info 52 54 B5 29 4B 7A 5E 95 55 55 (10 GCR bytes)
3. Header gap 55 55 55 55 55 55 55 55 55 (9 bytes, never read)
@@ -17,18 +17,20 @@
6. Inter-sector gap 55 55 55 55...55 55 (4 to 12 bytes, never read)
1. Header sync (SYNC for the next sector)
*/
#define C64_HEADER_DATA_SYNC 0xFF
#define C64_HEADER_BLOCK_ID 0x08
#define C64_DATA_BLOCK_ID 0x07
#define C64_HEADER_GAP 0x55
#define C64_INTER_SECTOR_GAP 0x55
#define C64_PADDING 0x0F
#define C64_HEADER_DATA_SYNC 0xFF
#define C64_HEADER_BLOCK_ID 0x08
#define C64_DATA_BLOCK_ID 0x07
#define C64_HEADER_GAP 0x55
#define C64_INTER_SECTOR_GAP 0x55
#define C64_PADDING 0x0F
#define C64_TRACKS_PER_DISK 40
#define C64_BAM_TRACK 17
#define C64_TRACKS_PER_DISK 40
#define C64_BAM_TRACK 17
extern std::unique_ptr<Decoder> createCommodore64Decoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createCommodore64Encoder(const EncoderProto& config);
extern std::unique_ptr<Decoder> createCommodore64Decoder(
const DecoderProto& config);
extern std::unique_ptr<Encoder> createCommodore64Encoder(
const EncoderProto& config);
extern nanoseconds_t clockPeriodForC64Track(unsigned track);

View File

@@ -96,8 +96,7 @@ public:
}
};
std::unique_ptr<Decoder> createCommodore64Decoder(
const DecoderProto& config)
std::unique_ptr<Decoder> createCommodore64Decoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new Commodore64Decoder(config));
}

View File

@@ -51,26 +51,6 @@ static void write_bits(
}
}
void bindump(std::ostream& stream, std::vector<bool>& buffer)
{
size_t pos = 0;
while ((pos < buffer.size()) and (pos < 520))
{
stream << fmt::format("{:5d} : ", pos);
for (int i = 0; i < 40; i++)
{
if ((pos + i) < buffer.size())
stream << fmt::format("{:01b}", (buffer[pos + i]));
else
stream << "-- ";
if ((((pos + i + 1) % 8) == 0) and i != 0)
stream << " ";
}
stream << std::endl;
pos += 40;
}
}
static std::vector<bool> encode_data(uint8_t input)
{
/*
@@ -214,8 +194,7 @@ public:
writeSector(bits, cursor, sector);
if (cursor >= bits.size())
Error() << fmt::format(
"track data overrun by {} bits", cursor - bits.size());
error("track data overrun by {} bits", cursor - bits.size());
fillBitmapTo(bits, cursor, bits.size(), {true, false});
std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -243,8 +222,7 @@ private:
{
// There is data to encode to disk.
if ((sector->data.size() != C64_SECTOR_LENGTH))
Error() << fmt::format(
"unsupported sector size {} --- you must pick 256",
error("unsupported sector size {} --- you must pick 256",
sector->data.size());
// 1. Write header Sync (not GCR)

View File

@@ -13,16 +13,18 @@
const FluxPattern SECTOR_RECORD_PATTERN(24, F85_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(24, F85_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN({ &SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN });
const FluxMatchers ANY_RECORD_PATTERN(
{&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});
static int decode_data_gcr(uint8_t gcr)
{
switch (gcr)
{
#define GCR_ENTRY(gcr, data) \
case gcr: return data;
#include "data_gcr.h"
#undef GCR_ENTRY
#define GCR_ENTRY(gcr, data) \
case gcr: \
return data;
#include "data_gcr.h"
#undef GCR_ENTRY
}
return -1;
}
@@ -37,11 +39,11 @@ static Bytes decode(const std::vector<bool>& bits)
while (ii != bits.end())
{
uint8_t inputfifo = 0;
for (size_t i=0; i<5; i++)
for (size_t i = 0; i < 5; i++)
{
if (ii == bits.end())
break;
inputfifo = (inputfifo<<1) | *ii++;
inputfifo = (inputfifo << 1) | *ii++;
}
bitw.push(decode_data_gcr(inputfifo), 4);
@@ -54,56 +56,55 @@ static Bytes decode(const std::vector<bool>& bits)
class DurangoF85Decoder : public Decoder
{
public:
DurangoF85Decoder(const DecoderProto& config):
Decoder(config)
{}
DurangoF85Decoder(const DecoderProto& config): Decoder(config) {}
nanoseconds_t advanceToNextRecord() override
{
return seekToPattern(ANY_RECORD_PATTERN);
}
{
return seekToPattern(ANY_RECORD_PATTERN);
}
void decodeSectorRecord() override
{
/* Skip sync bits and ID byte. */
{
/* Skip sync bits and ID byte. */
if (readRaw24() != F85_SECTOR_RECORD)
return;
if (readRaw24() != F85_SECTOR_RECORD)
return;
/* Read header. */
/* Read header. */
const auto& bytes = decode(readRawBits(6*10));
const auto& bytes = decode(readRawBits(6 * 10));
_sector->logicalSector = bytes[2];
_sector->logicalSide = 0;
_sector->logicalTrack = bytes[0];
_sector->logicalSector = bytes[2];
_sector->logicalSide = 0;
_sector->logicalTrack = bytes[0];
uint16_t wantChecksum = bytes.reader().seek(4).read_be16();
uint16_t gotChecksum = crc16(CCITT_POLY, 0xef21, bytes.slice(0, 4));
if (wantChecksum == gotChecksum)
_sector->status = Sector::DATA_MISSING; /* unintuitive but correct */
}
uint16_t wantChecksum = bytes.reader().seek(4).read_be16();
uint16_t gotChecksum = crc16(CCITT_POLY, 0xef21, bytes.slice(0, 4));
if (wantChecksum == gotChecksum)
_sector->status =
Sector::DATA_MISSING; /* unintuitive but correct */
}
void decodeDataRecord() override
{
/* Skip sync bits ID byte. */
{
/* Skip sync bits ID byte. */
if (readRaw24() != F85_DATA_RECORD)
return;
if (readRaw24() != F85_DATA_RECORD)
return;
const auto& bytes = decode(readRawBits((F85_SECTOR_LENGTH+3)*10))
.slice(0, F85_SECTOR_LENGTH+3);
ByteReader br(bytes);
const auto& bytes = decode(readRawBits((F85_SECTOR_LENGTH + 3) * 10))
.slice(0, F85_SECTOR_LENGTH + 3);
ByteReader br(bytes);
_sector->data = br.read(F85_SECTOR_LENGTH);
uint16_t wantChecksum = br.read_be16();
uint16_t gotChecksum = crc16(CCITT_POLY, 0xbf84, _sector->data);
_sector->status = (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
_sector->data = br.read(F85_SECTOR_LENGTH);
uint16_t wantChecksum = br.read_be16();
uint16_t gotChecksum = crc16(CCITT_POLY, 0xbf84, _sector->data);
_sector->status =
(wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
}
};
std::unique_ptr<Decoder> createDurangoF85Decoder(const DecoderProto& config)
{
return std::unique_ptr<Decoder>(new DurangoF85Decoder(config));
return std::unique_ptr<Decoder>(new DurangoF85Decoder(config));
}

View File

@@ -2,9 +2,10 @@
#define F85_H
#define F85_SECTOR_RECORD 0xffffce /* 1111 1111 1111 1111 1100 1110 */
#define F85_DATA_RECORD 0xffffcb /* 1111 1111 1111 1111 1100 1101 */
#define F85_SECTOR_LENGTH 512
#define F85_DATA_RECORD 0xffffcb /* 1111 1111 1111 1111 1100 1101 */
#define F85_SECTOR_LENGTH 512
extern std::unique_ptr<Decoder> createDurangoF85Decoder(const DecoderProto& config);
extern std::unique_ptr<Decoder> createDurangoF85Decoder(
const DecoderProto& config);
#endif

View File

@@ -14,10 +14,10 @@
const FluxPattern SECTOR_ID_PATTERN(16, 0xabaa);
/*
/*
* Reverse engineered from a dump of the floppy drive's ROM. I have no idea how
* it works.
*
*
* LF8BA:
* clra
* staa X00B0
@@ -100,45 +100,43 @@ static uint16_t checksum(const Bytes& bytes)
class Fb100Decoder : public Decoder
{
public:
    Fb100Decoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(SECTOR_ID_PATTERN);
    }

    void decodeSectorRecord() override
    {
        auto rawbits = readRawBits(FB100_RECORD_SIZE * 16);
        const Bytes bytes = decodeFmMfm(rawbits).slice(0, FB100_RECORD_SIZE);
        ByteReader br(bytes);
        br.seek(1);
        const Bytes id = br.read(FB100_ID_SIZE);
        uint16_t wantIdCrc = br.read_be16();
        uint16_t gotIdCrc = checksum(id);
        const Bytes payload = br.read(FB100_PAYLOAD_SIZE);
        uint16_t wantPayloadCrc = br.read_be16();
        uint16_t gotPayloadCrc = checksum(payload);

        if (wantIdCrc != gotIdCrc)
            return;

        uint8_t abssector = id[2];
        _sector->logicalTrack = abssector >> 1;
        _sector->logicalSide = 0;
        _sector->logicalSector = abssector & 1;
        _sector->data.writer().append(id.slice(5, 12)).append(payload);

        _sector->status = (wantPayloadCrc == gotPayloadCrc)
                              ? Sector::OK
                              : Sector::BAD_CHECKSUM;
    }
};

std::unique_ptr<Decoder> createFb100Decoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new Fb100Decoder(config));
}

@@ -8,4 +8,3 @@
extern std::unique_ptr<Decoder> createFb100Decoder(const DecoderProto& config);
#endif

@@ -112,10 +112,11 @@ public:
        const Image& image) override
    {
        IbmEncoderProto::TrackdataProto trackdata;
        getEncoderTrackData(
            trackdata, trackInfo->logicalTrack, trackInfo->logicalSide);
        auto trackLayout = Layout::getLayoutOfTrack(
            trackInfo->logicalTrack, trackInfo->logicalSide);

        auto writeBytes = [&](const Bytes& bytes)
        {
@@ -257,7 +258,7 @@ public:
        }

        if (_cursor >= _bits.size())
            error("track data overrun");
        while (_cursor < _bits.size())
            writeFillerRawBytes(1, gapFill);
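Most of the hunks in this comparison swap `Error() << ...` stream expressions for calls to a variadic `error(...)` that takes an fmt-style format string. The real declaration lives elsewhere in the tree and isn't shown in this diff, so the sketch below is an assumption about what such a helper could look like, not the project's actual API:

```
// Illustration only --- the actual error() declaration is not part of this
// diff; the signature below is an assumption, not the project's API.
#include <fmt/format.h>
#include <stdexcept>
#include <utility>

template <typename... Args>
[[noreturn]] void error(fmt::format_string<Args...> formatString, Args&&... args)
{
    // Formats like fmt::format() and aborts the current operation.
    throw std::runtime_error(
        fmt::format(formatString, std::forward<Args>(args)...));
}
```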

@@ -31,9 +31,7 @@ class Decoder;
class DecoderProto;
class EncoderProto;
extern std::unique_ptr<Decoder> createIbmDecoder(const DecoderProto& config);
extern std::unique_ptr<Encoder> createIbmEncoder(const EncoderProto& config);
#endif

@@ -12,22 +12,25 @@
const FluxPattern SECTOR_RECORD_PATTERN(24, MAC_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(24, MAC_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});

static int decode_data_gcr(uint8_t gcr)
{
    switch (gcr)
    {
#define GCR_ENTRY(gcr, data) \
    case gcr:                \
        return data;
#include "data_gcr.h"
#undef GCR_ENTRY
    }
    return -1;
}

/* This is extremely inspired by the MESS implementation, written by Nathan
 * Woods and R. Belmont:
 * https://github.com/mamedev/mame/blob/4263a71e64377db11392c458b580c5ae83556bc7/src/lib/formats/ap_dsk35.cpp
 */
static Bytes decode_crazy_data(const Bytes& input, Sector::Status& status)
{
@@ -41,7 +44,7 @@ static Bytes decode_crazy_data(const Bytes& input, Sector::Status& status)
    uint8_t b2[LOOKUP_LEN + 1];
    uint8_t b3[LOOKUP_LEN + 1];

    for (int i = 0; i <= LOOKUP_LEN; i++)
    {
        uint8_t w4 = br.read_8();
        uint8_t w1 = br.read_8();
@@ -125,67 +128,68 @@ uint8_t decode_side(uint8_t side)
class MacintoshDecoder : public Decoder
{
public:
    MacintoshDecoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }

    void decodeSectorRecord() override
    {
        if (readRaw24() != MAC_SECTOR_RECORD)
            return;

        /* Read header. */

        auto header = toBytes(readRawBits(7 * 8)).slice(0, 7);

        uint8_t encodedTrack = decode_data_gcr(header[0]);
        if (encodedTrack != (_sector->physicalTrack & 0x3f))
            return;

        uint8_t encodedSector = decode_data_gcr(header[1]);
        uint8_t encodedSide = decode_data_gcr(header[2]);
        uint8_t formatByte = decode_data_gcr(header[3]);
        uint8_t wantedsum = decode_data_gcr(header[4]);

        if (encodedSector > 11)
            return;

        _sector->logicalTrack = _sector->physicalTrack;
        _sector->logicalSide = decode_side(encodedSide);
        _sector->logicalSector = encodedSector;

        uint8_t gotsum =
            (encodedTrack ^ encodedSector ^ encodedSide ^ formatByte) & 0x3f;
        if (wantedsum == gotsum)
            _sector->status =
                Sector::DATA_MISSING; /* unintuitive but correct */
    }

    void decodeDataRecord() override
    {
        if (readRaw24() != MAC_DATA_RECORD)
            return;

        /* Read data. */

        readRawBits(8); /* skip spare byte */
        auto inputbuffer = toBytes(readRawBits(MAC_ENCODED_SECTOR_LENGTH * 8))
                               .slice(0, MAC_ENCODED_SECTOR_LENGTH);

        for (unsigned i = 0; i < inputbuffer.size(); i++)
            inputbuffer[i] = decode_data_gcr(inputbuffer[i]);

        _sector->status = Sector::BAD_CHECKSUM;
        Bytes userData = decode_crazy_data(inputbuffer, _sector->status);
        _sector->data.clear();
        _sector->data.writer()
            .append(userData.slice(12, 512))
            .append(userData.slice(0, 12));
    }
};

std::unique_ptr<Decoder> createMacintoshDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new MacintoshDecoder(config));
}
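For reference, the header check above boils down to a 6-bit XOR of the four decoded GCR header values (the decoded GCR symbols only carry six data bits each, hence the `& 0x3f`). A standalone restatement, not project code:

```
#include <cstdint>

// Standalone restatement of the header check in decodeSectorRecord() above:
// the stored checksum is the XOR of the track, sector, side and format
// bytes, truncated to 6 bits.
bool macHeaderChecksumOk(uint8_t track,
    uint8_t sector,
    uint8_t side,
    uint8_t format,
    uint8_t stored)
{
    uint8_t computed = (track ^ sector ^ side ^ format) & 0x3f;
    return computed == stored;
}
```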

@@ -174,7 +174,7 @@ static void write_sector(std::vector<bool>& bits,
    const std::shared_ptr<const Sector>& sector)
{
    if ((sector->data.size() != 512) && (sector->data.size() != 524))
        error("unsupported sector size --- you must pick 512 or 524");

    write_bits(bits, cursor, 0xff, 1 * 8); /* pad byte */
    for (int i = 0; i < 7; i++)
@@ -239,13 +239,12 @@ public:
            write_sector(bits, cursor, sector);

        if (cursor >= bits.size())
            error("track data overrun by {} bits", cursor - bits.size());
        fillBitmapTo(bits, cursor, bits.size(), {true, false});

        std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
        fluxmap->appendBits(
            bits, calculatePhysicalClockPeriod(clockRateUs * 1e3, 200e6));
        return fluxmap;
    }
@@ -253,8 +252,7 @@ private:
    const MacintoshEncoderProto& _config;
};

std::unique_ptr<Encoder> createMacintoshEncoder(const EncoderProto& config)
{
    return std::unique_ptr<Encoder>(new MacintoshEncoder(config));
}

@@ -1,12 +1,12 @@
#ifndef MACINTOSH_H
#define MACINTOSH_H
#define MAC_SECTOR_RECORD 0xd5aa96 /* 1101 0101 1010 1010 1001 0110 */
#define MAC_DATA_RECORD 0xd5aaad /* 1101 0101 1010 1010 1010 1101 */

#define MAC_SECTOR_LENGTH 524 /* yes, really */
#define MAC_ENCODED_SECTOR_LENGTH 703
#define MAC_FORMAT_BYTE 0x22
#define MAC_TRACKS_PER_DISK 80
@@ -15,8 +15,9 @@ class Decoder;
class DecoderProto;
class EncoderProto;
extern std::unique_ptr<Decoder> createMacintoshDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createMacintoshEncoder(
    const EncoderProto& config);
#endif

@@ -20,17 +20,20 @@ static const FluxPattern SECTOR_SYNC_PATTERN(64, 0xAAAAAAAAAAAA5555LL);
static const FluxPattern SECTOR_ADVANCE_PATTERN(64, 0xAAAAAAAAAAAAAAAALL);
/* Standard Micropolis checksum. Adds all bytes, with carry. */
uint8_t micropolisChecksum(const Bytes& bytes)
{
    ByteReader br(bytes);
    uint16_t sum = 0;
    while (!br.eof())
    {
        if (sum > 0xFF)
        {
            sum -= 0x100 - 1;
        }
        sum += br.read_8();
    }
    /* The last carry is ignored */
    return sum & 0xFF;
}
/* Vector MZOS does not use the standard Micropolis checksum.
@@ -41,145 +44,164 @@ uint8_t micropolisChecksum(const Bytes& bytes) {
* Unlike the Micropolis checksum, this does not cover the 12-byte
* header (track, sector, 10 OS-specific bytes.)
*/
uint8_t mzosChecksum(const Bytes& bytes)
{
    ByteReader br(bytes);
    uint8_t checksum = 0;
    uint8_t databyte;

    while (!br.eof())
    {
        databyte = br.read_8();
        checksum ^= ((databyte << 1) | (databyte >> 7));
    }

    return checksum;
}

class MicropolisDecoder : public Decoder
{
public:
    MicropolisDecoder(const DecoderProto& config):
        Decoder(config),
        _config(config.micropolis())
    {
        _checksumType = _config.checksum_type();
    }

    nanoseconds_t advanceToNextRecord() override
    {
        nanoseconds_t now = tell().ns();

        /* For all but the first sector, seek to the next sector pulse.
         * The first sector does not contain the sector pulse in the fluxmap.
         */
        if (now != 0)
        {
            seekToIndexMark();
            now = tell().ns();
        }

        /* Discard a possible partial sector at the end of the track.
         * This partial sector could be mistaken for a conflicted sector, if
         * whatever data read happens to match the checksum of 0, which is
         * rare, but has been observed on some disks.
         */
        if (now > (getFluxmapDuration() - 12.5e6))
        {
            seekToIndexMark();
            return 0;
        }

        nanoseconds_t clock = seekToPattern(SECTOR_SYNC_PATTERN);

        auto syncDelta = tell().ns() - now;
        /* Due to the weak nature of the Micropolis SYNC patern,
         * it's possible to detect a false SYNC during the gap
         * between the sector pulse and the write gate. If the SYNC
         * is detected less than 100uS after the sector pulse, search
         * for another valid SYNC.
         *
         * Reference: Vector Micropolis Disk Controller Board Technical
         * Information Manual, pp. 1-16.
         */
        if ((syncDelta > 0) && (syncDelta < 100e3))
        {
            seekToPattern(SECTOR_ADVANCE_PATTERN);
            clock = seekToPattern(SECTOR_SYNC_PATTERN);
        }

        _sector->headerStartTime = tell().ns();

        /* seekToPattern() can skip past the index hole, if this happens
         * too close to the end of the Fluxmap, discard the sector.
         */
        if (_sector->headerStartTime > (getFluxmapDuration() - 12.5e6))
        {
            return 0;
        }

        return clock;
    }

    void decodeSectorRecord() override
    {
        readRawBits(48);
        auto rawbits = readRawBits(MICROPOLIS_ENCODED_SECTOR_SIZE * 16);
        auto bytes =
            decodeFmMfm(rawbits).slice(0, MICROPOLIS_ENCODED_SECTOR_SIZE);
        ByteReader br(bytes);

        int syncByte = br.read_8(); /* sync */
        if (syncByte != 0xFF)
            return;

        _sector->logicalTrack = br.read_8();
        _sector->logicalSide = _sector->physicalSide;
        _sector->logicalSector = br.read_8();
        if (_sector->logicalSector > 15)
            return;
        if (_sector->logicalTrack > 76)
            return;
        if (_sector->logicalTrack != _sector->physicalTrack)
            return;

        br.read(10); /* OS data or padding */
        auto data = br.read(MICROPOLIS_PAYLOAD_SIZE);
        uint8_t wantChecksum = br.read_8();

        /* If not specified, automatically determine the checksum type.
         * Once the checksum type is determined, it will be used for the
         * entire disk.
         */
        if (_checksumType == MicropolisDecoderProto::AUTO)
        {
            /* Calculate both standard Micropolis (MDOS, CP/M, OASIS) and MZOS
             * checksums */
            if (wantChecksum == micropolisChecksum(bytes.slice(1, 2 + 266)))
            {
                _checksumType = MicropolisDecoderProto::MICROPOLIS;
            }
            else if (wantChecksum ==
                     mzosChecksum(bytes.slice(
                         MICROPOLIS_HEADER_SIZE, MICROPOLIS_PAYLOAD_SIZE)))
            {
                _checksumType = MicropolisDecoderProto::MZOS;
                std::cout << "Note: MZOS checksum detected." << std::endl;
            }
        }

        uint8_t gotChecksum;

        if (_checksumType == MicropolisDecoderProto::MZOS)
        {
            gotChecksum = mzosChecksum(
                bytes.slice(MICROPOLIS_HEADER_SIZE, MICROPOLIS_PAYLOAD_SIZE));
        }
        else
        {
            gotChecksum = micropolisChecksum(bytes.slice(1, 2 + 266));
        }

        br.read(5); /* 4 byte ECC and ECC-present flag */

        if (_config.sector_output_size() == MICROPOLIS_PAYLOAD_SIZE)
            _sector->data = data;
        else if (_config.sector_output_size() == MICROPOLIS_ENCODED_SECTOR_SIZE)
            _sector->data = bytes;
        else
            error("Sector output size may only be 256 or 275");
        _sector->status =
            (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }

private:
    const MicropolisDecoderProto& _config;
    MicropolisDecoderProto_ChecksumType
        _checksumType; /* -1 = auto, 1 = Micropolis, 2=MZOS */
};

std::unique_ptr<Decoder> createMicropolisDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new MicropolisDecoder(config));
}
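A standalone restatement of the two checksum algorithms above, with a worked value for each. This is an illustration only: it uses plain `std::vector<uint8_t>` rather than the project's `Bytes` class, but the arithmetic mirrors the decoder code.

```
#include <cassert>
#include <cstdint>
#include <vector>

// Micropolis: add all bytes; when the running sum has overflowed 8 bits,
// fold the carry back in before the next add (end-around carry). The check
// happens before each add, so the final carry is dropped.
static uint8_t micropolisSum(const std::vector<uint8_t>& bytes)
{
    uint16_t sum = 0;
    for (uint8_t b : bytes)
    {
        if (sum > 0xFF)
            sum -= 0x100 - 1; /* same fold as the decoder above */
        sum += b;
    }
    return sum & 0xFF;
}

// MZOS: rotate each byte left by one bit and XOR it into the checksum.
static uint8_t mzosSum(const std::vector<uint8_t>& bytes)
{
    uint8_t checksum = 0;
    for (uint8_t b : bytes)
        checksum ^= uint8_t((b << 1) | (b >> 7));
    return checksum;
}

int main()
{
    assert(micropolisSum({0xFF, 0x02}) == 0x01); /* final carry dropped */
    assert(mzosSum({0x80, 0x01}) == 0x03);       /* 0x01 ^ 0x02 */
    return 0;
}
```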

@@ -12,7 +12,7 @@ static void write_sector(std::vector<bool>& bits,
{
    if ((sector->data.size() != 256) &&
        (sector->data.size() != MICROPOLIS_ENCODED_SECTOR_SIZE))
        error("unsupported sector size --- you must pick 256 or 275");

    int fullSectorSize = 40 + MICROPOLIS_ENCODED_SECTOR_SIZE + 40 + 35;
    auto fullSector = std::make_shared<std::vector<uint8_t>>();
@@ -24,8 +24,9 @@ static void write_sector(std::vector<bool>& bits,
    if (sector->data.size() == MICROPOLIS_ENCODED_SECTOR_SIZE)
    {
        if (sector->data[0] != 0xFF)
            error(
                "275 byte sector doesn't start with sync byte 0xFF. "
                "Corrupted sector");
        uint8_t wantChecksum = sector->data[1 + 2 + 266];
        uint8_t gotChecksum =
            micropolisChecksum(sector->data.slice(1, 2 + 266));
@@ -57,7 +58,7 @@ static void write_sector(std::vector<bool>& bits,
        fullSector->push_back(0);

    if (fullSector->size() != fullSectorSize)
        error("sector mismatched length");
    bool lastBit = false;
    encodeMfm(bits, cursor, fullSector, lastBit);

    /* filler */
@@ -91,12 +92,11 @@ public:
            write_sector(bits, cursor, sectorData);

        if (cursor != bits.size())
            error("track data mismatched length");

        std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
        fluxmap->appendBits(bits,
            calculatePhysicalClockPeriod(_config.clock_period_us() * 1e3,
                _config.rotational_period_ms() * 1e6));
        return fluxmap;
    }
@@ -105,8 +105,7 @@ private:
    const MicropolisEncoderProto& _config;
};

std::unique_ptr<Encoder> createMicropolisEncoder(const EncoderProto& config)
{
    return std::unique_ptr<Encoder>(new MicropolisEncoder(config));
}

@@ -1,17 +1,20 @@
#ifndef MICROPOLIS_H
#define MICROPOLIS_H
#define MICROPOLIS_PAYLOAD_SIZE (256)
#define MICROPOLIS_HEADER_SIZE (1 + 2 + 10)
#define MICROPOLIS_ENCODED_SECTOR_SIZE \
    (MICROPOLIS_HEADER_SIZE + MICROPOLIS_PAYLOAD_SIZE + 6)

class Decoder;
class Encoder;
class EncoderProto;
class DecoderProto;

extern std::unique_ptr<Decoder> createMicropolisDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createMicropolisEncoder(
    const EncoderProto& config);
extern uint8_t micropolisChecksum(const Bytes& bytes);
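The "256 or 275" figures in the encoder and decoder error messages come straight from this macro arithmetic; spelled out (illustrative only, the trailer breakdown is taken from the decoder's comments above):

```
// 1 sync byte + 2 address bytes + 10 OS bytes = 13-byte header;
// header + 256-byte payload + (1 checksum + 4 ECC + 1 ECC-present flag)
// = 275 bytes per encoded sector.
static_assert(1 + 2 + 10 == 13, "MICROPOLIS_HEADER_SIZE");
static_assert(13 + 256 + 6 == 275, "MICROPOLIS_ENCODED_SECTOR_SIZE");
```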

@@ -26,52 +26,51 @@ const FluxPattern ID_PATTERN(32, 0xaaaaffaf);
class MxDecoder : public Decoder
{
public:
    MxDecoder(const DecoderProto& config): Decoder(config) {}

    void beginTrack() override
    {
        _clock = _sector->clock = seekToPattern(ID_PATTERN);
        _currentSector = 0;
    }

    nanoseconds_t advanceToNextRecord() override
    {
        if (_currentSector == 11)
        {
            /* That was the last sector on the disk. */
            return 0;
        }
        else
            return _clock;
    }

    void decodeSectorRecord() override
    {
        /* Skip the ID pattern and track word, which is only present on the
         * first sector. We don't trust the track word because some driver
         * don't write it correctly. */
        if (_currentSector == 0)
            readRawBits(64);

        auto bits = readRawBits((SECTOR_SIZE + 2) * 16);
        auto bytes = decodeFmMfm(bits).slice(0, SECTOR_SIZE + 2);

        uint16_t gotChecksum = 0;
        ByteReader br(bytes);
        for (int i = 0; i < (SECTOR_SIZE / 2); i++)
            gotChecksum += br.read_be16();
        uint16_t wantChecksum = br.read_be16();

        _sector->logicalTrack = _sector->physicalTrack;
        _sector->logicalSide = _sector->physicalSide;
        _sector->logicalSector = _currentSector;
        _sector->data = bytes.slice(0, SECTOR_SIZE).swab();
        _sector->status =
            (gotChecksum == wantChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
        _currentSector++;
    }

private:
    nanoseconds_t _clock;
@@ -80,7 +79,5 @@ private:
std::unique_ptr<Decoder> createMxDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new MxDecoder(config));
}
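The MX check above is a plain 16-bit word sum: add up the sector payload as big-endian words (modulo 2^16) and compare against the word stored immediately after the payload. As a standalone restatement, not project code:

```
#include <cstddef>
#include <cstdint>

// Standalone restatement of the MX checksum: sum the payload as big-endian
// 16-bit words (modulo 2^16); the result should equal the stored word that
// follows the payload.
static uint16_t mxWordSum(const uint8_t* data, size_t payloadBytes)
{
    uint16_t sum = 0;
    for (size_t i = 0; i + 1 < payloadBytes; i += 2)
        sum += uint16_t((data[i] << 8) | data[i + 1]); /* big-endian word */
    return sum;
}
```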

@@ -22,7 +22,7 @@
#include "fmt/format.h"
#define MFM_ID 0xaaaaaaaaaaaa5545LL
#define FM_ID 0xaaaaaaaaaaaaffefLL
/*
* MFM sectors have 32 bytes of 00's followed by two sync characters,
* specified in the North Star MDS manual as 0xFBFB.
@@ -44,133 +44,143 @@ static const FluxPattern MFM_PATTERN(64, MFM_ID);
 */
static const FluxPattern FM_PATTERN(64, FM_ID);

const FluxMatchers ANY_SECTOR_PATTERN({
    &MFM_PATTERN,
    &FM_PATTERN,
});

/* Checksum is initially 0.
 * For each data byte, XOR with the current checksum.
 * Rotate checksum left, carrying bit 7 to bit 0.
 */
uint8_t northstarChecksum(const Bytes& bytes)
{
    ByteReader br(bytes);
    uint8_t checksum = 0;

    while (!br.eof())
    {
        checksum ^= br.read_8();
        checksum = ((checksum << 1) | ((checksum >> 7)));
    }

    return checksum;
}

class NorthstarDecoder : public Decoder
{
public:
    NorthstarDecoder(const DecoderProto& config):
        Decoder(config),
        _config(config.northstar())
    {
    }

    /* Search for FM or MFM sector record */
    nanoseconds_t advanceToNextRecord() override
    {
        nanoseconds_t now = tell().ns();

        /* For all but the first sector, seek to the next sector pulse.
         * The first sector does not contain the sector pulse in the fluxmap.
         */
        if (now != 0)
        {
            seekToIndexMark();
            now = tell().ns();
        }

        /* Discard a possible partial sector at the end of the track.
         * This partial sector could be mistaken for a conflicted sector, if
         * whatever data read happens to match the checksum of 0, which is
         * rare, but has been observed on some disks.
         */
        if (now > (getFluxmapDuration() - 21e6))
        {
            seekToIndexMark();
            return 0;
        }

        int msSinceIndex = std::round(now / 1e6);

        /* Note that the seekToPattern ignores the sector pulses, so if
         * a sector is not found for some reason, the seek will advance
         * past one or more sector pulses. For this reason, calculate
         * _hardSectorId after the sector header is found.
         */
        nanoseconds_t clock = seekToPattern(ANY_SECTOR_PATTERN);
        _sector->headerStartTime = tell().ns();

        /* Discard a possible partial sector. */
        if (_sector->headerStartTime > (getFluxmapDuration() - 21e6))
        {
            return 0;
        }

        int sectorFoundTimeRaw = std::round(_sector->headerStartTime / 1e6);
        int sectorFoundTime;

        /* Round time to the nearest 20ms */
        if ((sectorFoundTimeRaw % 20) < 10)
        {
            sectorFoundTime = (sectorFoundTimeRaw / 20) * 20;
        }
        else
        {
            sectorFoundTime = ((sectorFoundTimeRaw + 20) / 20) * 20;
        }

        /* Calculate the sector ID based on time since the index */
        _hardSectorId = (sectorFoundTime / 20) % 10;

        return clock;
    }

    void decodeSectorRecord() override
    {
        uint64_t id = toBytes(readRawBits(64)).reader().read_be64();
        unsigned recordSize, payloadSize, headerSize;

        if (id == MFM_ID)
        {
            recordSize = NORTHSTAR_ENCODED_SECTOR_SIZE_DD;
            payloadSize = NORTHSTAR_PAYLOAD_SIZE_DD;
            headerSize = NORTHSTAR_HEADER_SIZE_DD;
        }
        else
        {
            recordSize = NORTHSTAR_ENCODED_SECTOR_SIZE_SD;
            payloadSize = NORTHSTAR_PAYLOAD_SIZE_SD;
            headerSize = NORTHSTAR_HEADER_SIZE_SD;
        }

        auto rawbits = readRawBits(recordSize * 16);
        auto bytes = decodeFmMfm(rawbits).slice(0, recordSize);
        ByteReader br(bytes);

        _sector->logicalSide = _sector->physicalSide;
        _sector->logicalSector = _hardSectorId;
        _sector->logicalTrack = _sector->physicalTrack;

        if (headerSize == NORTHSTAR_HEADER_SIZE_DD)
        {
            br.read_8(); /* MFM second Sync char, usually 0xFB */
        }

        _sector->data = br.read(payloadSize);
        uint8_t wantChecksum = br.read_8();
        uint8_t gotChecksum =
            northstarChecksum(bytes.slice(headerSize - 1, payloadSize));
        _sector->status =
            (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }

private:
    const NorthstarDecoderProto& _config;
    uint8_t _hardSectorId;
};

std::unique_ptr<Decoder> createNorthstarDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new NorthstarDecoder(config));
}
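The hard-sector bookkeeping above derives the sector number purely from when the header turned up relative to the index: round to the nearest 20 ms slot and take it modulo 10. A standalone restatement, assuming the nominal ten-sectors-per-200-ms-revolution timing implied by the comments and by the North Star documentation below; this is not project code:

```
#include <cstdint>

// With ten sector pulses per 200 ms revolution, each hard-sector slot is
// 20 ms wide, so rounding the header-start time to the nearest 20 ms slot
// gives the sector number.
static int hardSectorIdFromTime(double headerStartNs)
{
    int ms = static_cast<int>(headerStartNs / 1e6 + 0.5); /* ns -> ms */
    int slot;
    if ((ms % 20) < 10)
        slot = (ms / 20) * 20;
    else
        slot = ((ms + 20) / 20) * 20;
    return (slot / 20) % 10;
}
```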

@@ -49,7 +49,7 @@ static void write_sector(std::vector<bool>& bits,
            doubleDensity = true;
            break;
        default:
            error("unsupported sector size --- you must pick 256 or 512");
            break;
    }
@@ -96,9 +96,10 @@ static void write_sector(std::vector<bool>& bits,
            fullSector->push_back(GAP2_FILL_BYTE);

        if (fullSector->size() != fullSectorSize)
            error("sector mismatched length ({}); expected {}, got {}",
                sector->data.size(),
                fullSector->size(),
                fullSectorSize);
    }
    else
    {
@@ -148,7 +149,7 @@ public:
            write_sector(bits, cursor, sectorData);

        if (cursor > bits.size())
            error("track data overrun");

        std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
        fluxmap->appendBits(bits,
@@ -161,8 +162,7 @@ private:
    const NorthstarEncoderProto& _config;
};

std::unique_ptr<Encoder> createNorthstarEncoder(const EncoderProto& config)
{
    return std::unique_ptr<Encoder>(new NorthstarEncoder(config));
}

@@ -1,7 +1,8 @@
#ifndef NORTHSTAR_H
#define NORTHSTAR_H
/* Northstar floppies are 10-hard sectored disks with a sector format as
 * follows:
*
* |----------------------------------|
* | SYNC Byte | Payload | Checksum |
@@ -12,15 +13,19 @@
*
*/
#define NORTHSTAR_PREAMBLE_SIZE_SD (16)
#define NORTHSTAR_PREAMBLE_SIZE_DD (32)
#define NORTHSTAR_HEADER_SIZE_SD (1)
#define NORTHSTAR_HEADER_SIZE_DD (2)
#define NORTHSTAR_PAYLOAD_SIZE_SD (256)
#define NORTHSTAR_PAYLOAD_SIZE_DD (512)
#define NORTHSTAR_CHECKSUM_SIZE (1)
#define NORTHSTAR_ENCODED_SECTOR_SIZE_SD \
    (NORTHSTAR_HEADER_SIZE_SD + NORTHSTAR_PAYLOAD_SIZE_SD + \
        NORTHSTAR_CHECKSUM_SIZE)
#define NORTHSTAR_ENCODED_SECTOR_SIZE_DD \
    (NORTHSTAR_HEADER_SIZE_DD + NORTHSTAR_PAYLOAD_SIZE_DD + \
        NORTHSTAR_CHECKSUM_SIZE)
class Decoder;
class Encoder;
@@ -29,7 +34,9 @@ class DecoderProto;
extern uint8_t northstarChecksum(const Bytes& bytes);
extern std::unique_ptr<Decoder> createNorthstarDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createNorthstarEncoder(
    const EncoderProto& config);
#endif /* NORTHSTAR */

@@ -29,28 +29,23 @@ static const FluxPattern SECTOR_PATTERN(64, 0xed55555555555555LL);
class RolandD20Decoder : public Decoder
{
public:
    RolandD20Decoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(SECTOR_PATTERN);
    }

    void decodeSectorRecord() override
    {
        auto rawbits = readRawBits(256);
        const auto& bytes = decodeFmMfm(rawbits);
        fmt::print("{} ", _sector->clock);
        hexdump(std::cout, bytes);
    }
};

std::unique_ptr<Decoder> createRolandD20Decoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new RolandD20Decoder(config));
}

@@ -1,4 +1,4 @@
#pragma once
extern std::unique_ptr<Decoder> createRolandD20Decoder(
    const DecoderProto& config);

@@ -7,4 +7,3 @@
extern std::unique_ptr<Decoder> createSmaky6Decoder(const DecoderProto& config);
#endif

@@ -38,61 +38,63 @@ const FluxPattern SECTOR_RECORD_PATTERN(32, 0x11112244);
const uint16_t DATA_ID = 0x550b;
const FluxPattern DATA_RECORD_PATTERN(32, 0x11112245);

const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});

class Tids990Decoder : public Decoder
{
public:
    Tids990Decoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }

    void decodeSectorRecord() override
    {
        auto bits = readRawBits(TIDS990_SECTOR_RECORD_SIZE * 16);
        auto bytes = decodeFmMfm(bits).slice(0, TIDS990_SECTOR_RECORD_SIZE);

        ByteReader br(bytes);
        if (br.read_be16() != SECTOR_ID)
            return;

        uint16_t gotChecksum =
            crc16(CCITT_POLY, bytes.slice(1, TIDS990_SECTOR_RECORD_SIZE - 3));

        _sector->logicalSide = br.read_8() >> 3;
        _sector->logicalTrack = br.read_8();
        br.read_8(); /* number of sectors per track */
        _sector->logicalSector = br.read_8();
        br.read_be16(); /* sector size */
        uint16_t wantChecksum = br.read_be16();

        if (wantChecksum == gotChecksum)
            _sector->status =
                Sector::DATA_MISSING; /* correct but unintuitive */
    }

    void decodeDataRecord() override
    {
        auto bits = readRawBits(TIDS990_DATA_RECORD_SIZE * 16);
        auto bytes = decodeFmMfm(bits).slice(0, TIDS990_DATA_RECORD_SIZE);

        ByteReader br(bytes);
        if (br.read_be16() != DATA_ID)
            return;

        uint16_t gotChecksum =
            crc16(CCITT_POLY, bytes.slice(1, TIDS990_DATA_RECORD_SIZE - 3));

        _sector->data = br.read(TIDS990_PAYLOAD_SIZE);
        uint16_t wantChecksum = br.read_be16();
        _sector->status =
            (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }
};

std::unique_ptr<Decoder> createTids990Decoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new Tids990Decoder(config));
}

@@ -127,14 +127,14 @@ public:
        }

        if (_cursor >= _bits.size())
            error("track data overrun");
        while (_cursor < _bits.size())
            writeBytes(1, 0x55);

        auto fluxmap = std::make_unique<Fluxmap>();
        fluxmap->appendBits(_bits,
            calculatePhysicalClockPeriod(
                clockRateUs * 1e3, _config.rotational_period_ms() * 1e6));
        return fluxmap;
    }
@@ -145,8 +145,7 @@ private:
    bool _lastBit;
};

std::unique_ptr<Encoder> createTids990Encoder(const EncoderProto& config)
{
    return std::unique_ptr<Encoder>(new Tids990Encoder(config));
}

@@ -1,18 +1,18 @@
#ifndef TIDS990_H
#define TIDS990_H
#define TIDS990_PAYLOAD_SIZE 288 /* bytes */
#define TIDS990_SECTOR_RECORD_SIZE 10 /* bytes */
#define TIDS990_DATA_RECORD_SIZE (TIDS990_PAYLOAD_SIZE + 4) /* bytes */

class Encoder;
class Decoder;
class DecoderProto;
class EncoderProto;

extern std::unique_ptr<Decoder> createTids990Decoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createTids990Encoder(
    const EncoderProto& config);
#endif

@@ -13,16 +13,18 @@
const FluxPattern SECTOR_RECORD_PATTERN(32, VICTOR9K_SECTOR_RECORD);
const FluxPattern DATA_RECORD_PATTERN(32, VICTOR9K_DATA_RECORD);
const FluxMatchers ANY_RECORD_PATTERN(
    {&SECTOR_RECORD_PATTERN, &DATA_RECORD_PATTERN});

static int decode_data_gcr(uint8_t gcr)
{
    switch (gcr)
    {
#define GCR_ENTRY(gcr, data) \
    case gcr:                \
        return data;
#include "data_gcr.h"
#undef GCR_ENTRY
    }
    return -1;
}
@@ -37,11 +39,11 @@ static Bytes decode(const std::vector<bool>& bits)
    while (ii != bits.end())
    {
        uint8_t inputfifo = 0;
        for (size_t i = 0; i < 5; i++)
        {
            if (ii == bits.end())
                break;
            inputfifo = (inputfifo << 1) | *ii++;
        }

        uint8_t decoded = decode_data_gcr(inputfifo);
@@ -55,63 +57,62 @@ static Bytes decode(const std::vector<bool>& bits)
class Victor9kDecoder : public Decoder
{
public:
    Victor9kDecoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        return seekToPattern(ANY_RECORD_PATTERN);
    }

    void decodeSectorRecord() override
    {
        /* Check the ID. */

        if (readRaw32() != VICTOR9K_SECTOR_RECORD)
            return;

        /* Read header. */

        auto bytes = decode(readRawBits(3 * 10)).slice(0, 3);

        uint8_t rawTrack = bytes[0];
        _sector->logicalSector = bytes[1];
        uint8_t gotChecksum = bytes[2];

        _sector->logicalTrack = rawTrack & 0x7f;
        _sector->logicalSide = rawTrack >> 7;
        uint8_t wantChecksum = bytes[0] + bytes[1];
        if ((_sector->logicalSector > 20) || (_sector->logicalTrack > 85) ||
            (_sector->logicalSide > 1))
            return;

        if (wantChecksum == gotChecksum)
            _sector->status =
                Sector::DATA_MISSING; /* unintuitive but correct */
    }

    void decodeDataRecord() override
    {
        /* Check the ID. */

        if (readRaw32() != VICTOR9K_DATA_RECORD)
            return;

        /* Read data. */

        auto bytes = decode(readRawBits((VICTOR9K_SECTOR_LENGTH + 4) * 10))
                         .slice(0, VICTOR9K_SECTOR_LENGTH + 4);
        ByteReader br(bytes);

        _sector->data = br.read(VICTOR9K_SECTOR_LENGTH);
        uint16_t gotChecksum = sumBytes(_sector->data);
        uint16_t wantChecksum = br.read_le16();
        _sector->status =
            (gotChecksum == wantChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }
};

std::unique_ptr<Decoder> createVictor9kDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new Victor9kDecoder(config));
}

@@ -169,14 +169,15 @@ public:
        const Image& image) override
    {
        Victor9kEncoderProto::TrackdataProto trackdata;
        getTrackFormat(
            trackdata, trackInfo->logicalTrack, trackInfo->logicalSide);

        unsigned bitsPerRevolution = (trackdata.rotational_period_ms() * 1e3) /
                                     trackdata.clock_period_us();
        std::vector<bool> bits(bitsPerRevolution);
        nanoseconds_t clockPeriod =
            calculatePhysicalClockPeriod(trackdata.clock_period_us() * 1e3,
                trackdata.rotational_period_ms() * 1e6);
        unsigned cursor = 0;

        fillBitmapTo(bits,
@@ -189,8 +190,7 @@ public:
            write_sector(bits, cursor, trackdata, *sector);

        if (cursor >= bits.size())
            error("track data overrun by {} bits", cursor - bits.size());
        fillBitmapTo(bits, cursor, bits.size(), {true, false});

        std::unique_ptr<Fluxmap> fluxmap(new Fluxmap);
@@ -202,8 +202,7 @@ private:
    const Victor9kEncoderProto& _config;
};

std::unique_ptr<Encoder> createVictor9kEncoder(const EncoderProto& config)
{
    return std::unique_ptr<Encoder>(new Victor9kEncoder(config));
}

@@ -13,12 +13,14 @@ class DecoderProto;
/* ... 1101 0100 1001
* ^^ ^^^^ ^^^^ ten bit IO byte */
#define VICTOR9K_DATA_RECORD 0xfffffd49
#define VICTOR9K_DATA_ID 0x8
#define VICTOR9K_SECTOR_LENGTH 512

extern std::unique_ptr<Decoder> createVictor9kDecoder(
    const DecoderProto& config);
extern std::unique_ptr<Encoder> createVictor9kEncoder(
    const EncoderProto& config);
#endif

@@ -16,42 +16,40 @@ static const FluxPattern SECTOR_START_PATTERN(16, 0xaaab);
class ZilogMczDecoder : public Decoder
{
public:
    ZilogMczDecoder(const DecoderProto& config): Decoder(config) {}

    nanoseconds_t advanceToNextRecord() override
    {
        seekToIndexMark();
        return seekToPattern(SECTOR_START_PATTERN);
    }

    void decodeSectorRecord() override
    {
        readRawBits(14);

        auto rawbits = readRawBits(140 * 16);
        auto bytes = decodeFmMfm(rawbits).slice(0, 140);
        ByteReader br(bytes);

        _sector->logicalSector = br.read_8() & 0x1f;
        _sector->logicalSide = 0;
        _sector->logicalTrack = br.read_8() & 0x7f;
        if (_sector->logicalSector > 31)
            return;
        if (_sector->logicalTrack > 80)
            return;

        _sector->data = br.read(132);
        uint16_t wantChecksum = br.read_be16();
        uint16_t gotChecksum = crc16(MODBUS_POLY, 0x0000, bytes.slice(0, 134));
        _sector->status =
            (wantChecksum == gotChecksum) ? Sector::OK : Sector::BAD_CHECKSUM;
    }
};

std::unique_ptr<Decoder> createZilogMczDecoder(const DecoderProto& config)
{
    return std::unique_ptr<Decoder>(new ZilogMczDecoder(config));
}

@@ -1,8 +1,7 @@
#ifndef ZILOGMCZ_H
#define ZILOGMCZ_H
extern std::unique_ptr<Decoder> createZilogMczDecoder(
    const DecoderProto& config);
#endif

@@ -17,6 +17,6 @@ $(ADFLIB_OBJS): CFLAGS += -Idep/adflib/src -Idep/adflib
ADFLIB_LIB = $(OBJDIR)/libadflib.a
$(ADFLIB_LIB): $(ADFLIB_OBJS)
ADFLIB_CFLAGS = -Idep/adflib/src
ADFLIB_LDFLAGS =
OBJS += $(ADFLIB_OBJS)

@@ -8,6 +8,6 @@ $(FATFS_OBJS): CFLAGS += -Idep/fatfs/source
FATFS_LIB = $(OBJDIR)/libfatfs.a
$(FATFS_LIB): $(FATFS_OBJS)
FATFS_CFLAGS = -Idep/fatfs/source
FATFS_LDFLAGS =
OBJS += $(FATFS_OBJS)

@@ -17,6 +17,6 @@ $(HFSUTILS_OBJS): CFLAGS += -Idep/hfsutils/libhfs
HFSUTILS_LIB = $(OBJDIR)/libhfsutils.a
$(HFSUTILS_LIB): $(HFSUTILS_OBJS)
HFSUTILS_CFLAGS = -Idep/hfsutils/libhfs
HFSUTILS_LDFLAGS =
OBJS += $(HFSUTILS_OBJS)

@@ -54,7 +54,7 @@ LIBUSBP_OBJS = $(patsubst %.c, $(OBJDIR)/%.o, $(LIBUSBP_SRCS))
$(LIBUSBP_OBJS): private CFLAGS += -Idep/libusbp/src -Idep/libusbp/include
LIBUSBP_LIB = $(OBJDIR)/libusbp.a
LIBUSBP_CFLAGS += -Idep/libusbp/include
LIBUSBP_LDFLAGS +=
$(LIBUSBP_LIB): $(LIBUSBP_OBJS)
OBJS += $(LIBUSBP_OBJS)

@@ -93,6 +93,22 @@ You're now looking at the _top_ of the board.
row of header sockets allowing you to plug the board directly onto the floppy
disk drive; for simplicity I'm leaving that as an exercise for the reader.)
### If you want to use a PCB
Alternatively, you can make an actual PCB!
<div style="text-align: center">
<a href="pcb.png"><img src="pcb.png" style="width:80%" alt="the PCB schematic"></a>
</div>
This is a passive breakout board designed to take a PSoC5 development board, a
standard 34-way PC connector, and a 50-way 8" drive connector. It was
contributed by a user --- thanks!
<a href="FluxEngine_eagle_pcb.zip">Download this to get it</a>. This package
contains the layout in Eagle format, a printable PDF of the PCB layout, and
gerbers suitable for sending off for manufacture.
### Grounding
You _also_ need to solder a wire between a handy GND pin on the board and
@@ -185,10 +201,14 @@ generic libusb stuff and should build and run on Windows, Linux and OSX as
well, although on Windows it'll need MSYS2 and mingw32. You'll need to
install some support packages.
- For Linux with Ubuntu/Debian:
`libusb-1.0-0-dev`, `libsqlite3-dev`, `zlib1g-dev`,
`libudev-dev`, `protobuf-compiler`, `libwxgtk3.0-gtk3-dev`,
`libfmt-dev`.
- For Linux with Fedora/Red Hat:
`git`, `make`, `gcc`, `gcc-c++`, `xxd`, `protobuf-compiler`,
`protobuf-devel`, `fmt-devel`, `systemd-devel`, `wxGTK3-devel`,
`libsqlite3x-devel`
- For OSX with Homebrew: `libusb`, `pkg-config`, `sqlite`,
`protobuf`, `truncate`, `wxwidgets`, `fmt`.
- For Windows with MSYS2: `make`, `mingw-w64-i686-libusb`,

@@ -1,4 +1,15 @@
40track_drive
====
<!-- This file is automatically generated. Do not edit. -->
# Adjust configuration for a 40-track drive
(This format has no documentation. Please file a bug.)
This is an extension profile; adding this to the command line will configure
FluxEngine to read from 40-track, 48tpi 5.25" drives. You have to tell it because there is
no way to detect this automatically.
For example:
```
fluxengine read ibm --180 40track_drive
```

@@ -1,5 +1,7 @@
acornadfs
====
<!-- This file is automatically generated. Do not edit. -->
# BBC Micro, Archimedes
Acorn ADFS disks are used by the 6502-based BBC Micro and ARM-based Archimedes
series of computers. They are yet another variation on MFM encoded IBM scheme
@@ -18,5 +20,20 @@ they might require nudging as the side order can't be reliably autodetected.
## Options
(no options)
- Format variants:
- `160`: 160kB 3.5" or 5.25" 40-track SSDD; S format
- `320`: 320kB 3.5" or 5.25" 80-track SSDD; M format
- `640`: 640kB 3.5" or 5.25" 80-track DSDD; L format
- `800`: 800kB 3.5" 80-track DSDD; D and E formats
- `1600`: 1600kB 3.5" 80-track DSHD; F formats
## Examples
To read:
- `fluxengine read acornadfs --160 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --320 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --640 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --800 -s drive:0 -o acornadfs.img`
- `fluxengine read acornadfs --1600 -s drive:0 -o acornadfs.img`

@@ -1,5 +1,7 @@
acorndfs
====
<!-- This file is automatically generated. Do not edit. -->
# Acorn Atom, BBC Micro series
Acorn DFS disks are used by the Acorn Atom and BBC Micro series of computers.
They are pretty standard FM encoded IBM scheme disks, with 256-sectors and
@@ -14,7 +16,21 @@ requires a bit of fiddling as they have the same tracks on twice.
## Options
(no options)
- Format variants:
- `100`: 100kB 40-track SSSD
- `200`: 200kB 80-track SSSD
## Examples
To read:
- `fluxengine read acorndfs --100 -s drive:0 -o acorndfs.img`
- `fluxengine read acorndfs --200 -s drive:0 -o acorndfs.img`
To write:
- `fluxengine write acorndfs --100 -d drive:0 -i acorndfs.img`
- `fluxengine write acorndfs --200 -d drive:0 -i acorndfs.img`
## References

@@ -1,5 +1,7 @@
aeslanier
====
<!-- This file is automatically generated. Do not edit. -->
# 616kB 5.25" 77-track SSDD hard sectored
Back in 1980 Lanier released a series of very early integrated word processor
appliances, the No Problem. These were actually [rebranded AES Data Superplus
@@ -31,6 +33,12 @@ based on what looks right. If anyone knows _anything_ about these disks,
(no options)
## Examples
To read:
- `fluxengine read aeslanier -s drive:0 -o aeslanier.img`
## References
* [SA800 Diskette Storage Drive - Theory Of

@@ -1,5 +1,7 @@
agat
====
<!-- This file is automatically generated. Do not edit. -->
# 840kB 5.25" 80-track DS
The Agat (Russian: Агат) was a series of Soviet personal computers first produced in
1983. These were based around a 6502 and were nominally Apple II-compatible
@@ -14,6 +16,16 @@ profile.
(no options)
## Examples
To read:
- `fluxengine read agat -s drive:0 -o agat.img`
To write:
- `fluxengine write agat -d drive:0 -i agat.img`
## References
- [Magazine article on the

@@ -1,5 +1,7 @@
amiga
====
<!-- This file is automatically generated. Do not edit. -->
# 880kB 3.5" DSDD
Amiga disks use MFM, but don't use IBM scheme. Instead, the entire track is
read and written as a unit, with each sector butting up against the previous
@@ -16,7 +18,19 @@ distinctly subpar and not particularly good at detecting errors.
## Options
(no options)
- Sector size:
- `without_metadata`: 512-byte sectors
- `with_metadata`: 528-byte sectors
## Examples
To read:
- `fluxengine read amiga -s drive:0 -o amiga.adf`
To write:
- `fluxengine write amiga -d drive:0 -i amiga.adf`
## References

@@ -1,5 +1,7 @@
ampro
====
<!-- This file is automatically generated. Do not edit. -->
# CP/M
The Ampro Little Board was a very simple and cheap Z80-based computer from
1984, which ran CP/M. It was, in fact, a single PCB which you could mount
@@ -33,7 +35,16 @@ kayinfo.lbr
## Options
(no options)
- Format variants:
- `400`: 400kB 40-track DSDD
- `800`: 800kB 80-track DSDD
## Examples
To read:
- `fluxengine read ampro --400 -s drive:0 -o ampro.img`
- `fluxengine read ampro --800 -s drive:0 -o ampro.img`
## References

@@ -1,5 +1,7 @@
apple2
====
<!-- This file is automatically generated. Do not edit. -->
# Prodos, Appledos, and CP/M
Apple II disks are nominally fairly sensible 40-track, single-sided, 256
bytes-per-sector jobs. However, they come in two varieties: DOS 3.3/ProDOS and
@@ -42,7 +44,7 @@ volume.
## Options
- Format variants:
- `140`: 140kB 5.25" 35-track SS
- `640`: 640kB 5.25" 80-track DS
- Filesystem and sector skew:
@@ -52,6 +54,18 @@ volume.
- `cpm`: use CP/M soft sector skew and filesystem
- `side1`: for AppleDOS file system access, read the volume on side 1 of a disk
## Examples
To read:
- `fluxengine read apple2 --140 -s drive:0 -o apple2.img`
- `fluxengine read apple2 --640 -s drive:0 -o apple2.img`
To write:
- `fluxengine write apple2 --140 -d drive:0 -i apple2.img`
- `fluxengine write apple2 --640 -d drive:0 -i apple2.img`
## References
- [Beneath Apple DOS](https://fabiensanglard.net/fd_proxy/prince_of_persia/Beneath%20Apple%20DOS.pdf)

@@ -1,4 +1,16 @@
apple2_drive
====
<!-- This file is automatically generated. Do not edit. -->
# Adjust configuration for a 40-track Apple II drive
(This format has no documentation. Please file a bug.)
This is an extension profile; adding this to the command line will configure
FluxEngine to adjust the pinout and track spacing to work with an Apple II
drive. This only works on Greaseweazle hardware and requires a custom
connector.
For example:
```
fluxengine read apple2 --160 apple2_drive
```

@@ -1,5 +1,7 @@
atarist
====
<!-- This file is automatically generated. Do not edit. -->
# Almost PC compatible
Atari ST disks are standard MFM encoded IBM scheme disks without an IAM header.
Disks are typically formatted 512 bytes per sector with between 9-10 (sometimes
@@ -13,7 +15,39 @@ Be aware that many PC drives (including mine) won't do the 82 track formats.
## Options
(no options)
- Format variants:
- `360`: 360kB 3.5" 80-track 9-sector SSDD
- `370`: 370kB 3.5" 82-track 9-sector SSDD
- `400`: 400kB 3.5" 80-track 10-sector SSDD
- `410`: 410kB 3.5" 82-track 10-sector SSDD
- `720`: 720kB 3.5" 80-track 9-sector DSDD
- `740`: 740kB 3.5" 82-track 9-sector DSDD
- `800`: 800kB 3.5" 80-track 10-sector DSDD
- `820`: 820kB 3.5" 82-track 10-sector DSDD
## Examples
To read:
- `fluxengine read atarist --360 -s drive:0 -o atarist.img`
- `fluxengine read atarist --370 -s drive:0 -o atarist.img`
- `fluxengine read atarist --400 -s drive:0 -o atarist.img`
- `fluxengine read atarist --410 -s drive:0 -o atarist.img`
- `fluxengine read atarist --720 -s drive:0 -o atarist.img`
- `fluxengine read atarist --740 -s drive:0 -o atarist.img`
- `fluxengine read atarist --800 -s drive:0 -o atarist.img`
- `fluxengine read atarist --820 -s drive:0 -o atarist.img`
To write:
- `fluxengine write atarist --360 -d drive:0 -i atarist.img`
- `fluxengine write atarist --370 -d drive:0 -i atarist.img`
- `fluxengine write atarist --400 -d drive:0 -i atarist.img`
- `fluxengine write atarist --410 -d drive:0 -i atarist.img`
- `fluxengine write atarist --720 -d drive:0 -i atarist.img`
- `fluxengine write atarist --740 -d drive:0 -i atarist.img`
- `fluxengine write atarist --800 -d drive:0 -i atarist.img`
- `fluxengine write atarist --820 -d drive:0 -i atarist.img`
## References


@@ -1,5 +1,7 @@
bk
====
## 800kB 5.25"/3.5" 80-track 10-sector DSDD
<!-- This file is automatically generated. Do not edit. -->
# 800kB 5.25"/3.5" 80-track 10-sector DSDD
The BK (an abbreviation of the Russian for 'home computer')
is a Soviet era personal computer from Elektronika based on a PDP-11
@@ -16,3 +18,13 @@ on what was available at the time, with the same format on both.
(no options)
## Examples
To read:
- `fluxengine read bk -s drive:0 -o bk800.img`
To write:
- `fluxengine write bk -d drive:0 -i bk800.img`


@@ -1,5 +1,7 @@
brother
====
## GCR family
<!-- This file is automatically generated. Do not edit. -->
# GCR family
Brother word processor disks are weird, using custom tooling and chipsets.
They are completely not PC compatible in every possible way other than the
@@ -34,7 +36,21 @@ investigate.
## Options
(no options)
- Format variants:
- `120`: 120kB 3.5" 39-track SS GCR
- `240`: 240kB 3.5" 78-track SS GCR
## Examples
To read:
- `fluxengine read brother --120 -s drive:0 -o brother.img`
- `fluxengine read brother --240 -s drive:0 -o brother.img`
To write:
- `fluxengine write brother --120 -d drive:0 -i brother.img`
- `fluxengine write brother --240 -d drive:0 -i brother.img`
Dealing with misaligned disks
-----------------------------


@@ -1,5 +1,7 @@
commodore
====
## 1541, 1581, 8050 and variations
<!-- This file is automatically generated. Do not edit. -->
# 1541, 1581, 8050 and variations
Commodore 8-bit computer disks come in two varieties: GCR, which are the
overwhelming majority; and MFM, only used on the 1571 and 1581. The latter were
@@ -41,7 +43,29 @@ A CMD FD2000 disk (a popular third-party Commodore disk drive)
## Options
(no options)
- Format variants:
- `171`: 171kB 1541, 35-track variant
- `192`: 192kB 1541, 40-track variant
- `800`: 800kB 3.5" 1581
- `1042`: 1042kB 5.25" 8051
- `1620`: 1620kB, CMD FD2000
## Examples
To read:
- `fluxengine read commodore --171 -s drive:0 -o commodore.d64`
- `fluxengine read commodore --192 -s drive:0 -o commodore.d64`
- `fluxengine read commodore --800 -s drive:0 -o commodore.d64`
- `fluxengine read commodore --1042 -s drive:0 -o commodore.d64`
- `fluxengine read commodore --1620 -s drive:0 -o commodore.d64`
To write:
- `fluxengine write commodore --171 -d drive:0 -i commodore.d64`
- `fluxengine write commodore --192 -d drive:0 -i commodore.d64`
- `fluxengine write commodore --800 -d drive:0 -i commodore.d64`
- `fluxengine write commodore --1620 -d drive:0 -i commodore.d64`
## References


@@ -1,5 +1,7 @@
eco1
====
## CP/M; 1210kB 77-track mixed format DSHD
<!-- This file is automatically generated. Do not edit. -->
# CP/M; 1210kB 77-track mixed format DSHD
The Eco1 is an Italian CP/M machine produced in 1982. It had 64kB of RAM, in
later models expandable up to 384kB, and _two_ Z80 processors. One of these was
@@ -27,6 +29,12 @@ images.
(no options)
## Examples
To read:
- `fluxengine read eco1 -s drive:0 -o eco1.img`
## References
- [Apulio Retrocomputing's page on the


@@ -1,5 +1,7 @@
epsonpf10
====
## CP/M; 3.5" 40-track DSDD
<!-- This file is automatically generated. Do not edit. -->
# CP/M; 3.5" 40-track DSDD
The Epson PF10 is the disk unit for the Epson Z80 series of 'laptops', running
CP/M. It uses a single-sided 40-track 3.5" format, which is unusual, but the
@@ -9,3 +11,9 @@ format itself is yet another IBM scheme variant.
(no options)
## Examples
To read:
- `fluxengine read epsonpf10 -s drive:0 -o epsonpf10.img`


@@ -1,5 +1,7 @@
f85
====
## 461kB 5.25" 77-track SS
<!-- This file is automatically generated. Do not edit. -->
# 461kB 5.25" 77-track SS
The Durango F85 was an early office computer based around a 5MHz 8085 processor,
sold in 1977. It had an impressive 64kB of RAM, upgradable to 128kB, and ran
@@ -30,6 +32,12 @@ touch](https://github.com/davidgiven/fluxengine/issues/new).
(no options)
## Examples
To read:
- `fluxengine read f85 -s drive:0 -o f85.img`
## References
There's amazingly little information about these things.


@@ -1,5 +1,7 @@
fb100
====
## 100kB 3.5" 40-track SSSD
<!-- This file is automatically generated. Do not edit. -->
# 100kB 3.5" 40-track SSSD
The Brother FB-100 is a serial-attached smart floppy drive used by several
different machines for mass storage, including the Tandy Model 100 and
@@ -24,6 +26,12 @@ I don't have access to one of those disks.
(no options)
## Examples
To read:
- `fluxengine read fb100 -s drive:0 -o fb100.img`
## References
- [Tandy Portable Disk Drive operations


@@ -1,5 +1,7 @@
hplif
====
## a variety of disk formats used by HP
<!-- This file is automatically generated. Do not edit. -->
# a variety of disk formats used by HP
LIF, a.k.a. Logical Interchange Format, is a series of formats used by
Hewlett-Packard across their entire range of computers, from calculators to
@@ -11,5 +13,22 @@ encoding scheme.
## Options
(no options)
- Format variants:
- `264`: 264kB 3.5" 66-track SSDD; HP9121 format
- `616`: 616kB 3.5" 77-track DSDD
- `770`: 770kB 3.5" 77-track DSDD
## Examples
To read:
- `fluxengine read hplif --264 -s drive:0 -o hplif.img`
- `fluxengine read hplif --616 -s drive:0 -o hplif.img`
- `fluxengine read hplif --770 -s drive:0 -o hplif.img`
To write:
- `fluxengine write hplif --264 -d drive:0 -i hplif.img`
- `fluxengine write hplif --616 -d drive:0 -i hplif.img`
- `fluxengine write hplif --770 -d drive:0 -i hplif.img`


@@ -1,5 +1,7 @@
ibm
====
## Generic PC 3.5"/5.25" disks
<!-- This file is automatically generated. Do not edit. -->
# Generic PC 3.5"/5.25" disks
IBM scheme disks are _the_ most common disk format, ever. They're used by a
huge variety of different systems, and they come in a huge variety of different
@@ -36,7 +38,47 @@ image format. FluxEngine will use these parameters.
## Options
(no options)
- Format variants:
- `auto`: try to autodetect the format (unreliable)
- `160`: 160kB 5.25" 40-track 8-sector SSDD
- `180`: 180kB 5.25" 40-track 9-sector SSDD
- `320`: 320kB 5.25" 40-track 8-sector DSDD
- `360`: 360kB 5.25" 40-track 9-sector DSDD
- `720_96`: 720kB 5.25" 80-track 9-sector DSDD
- `720_135`: 720kB 3.5" 80-track 9-sector DSDD
- `1200`: 1200kB 5.25" 80-track 15-sector DSHD
- `1232`: 1232kB 5.25" 77-track 8-sector DSHD
- `1440`: 1440kB 3.5" 80-track 18-sector DSHD
- `1680`: 1680kB 3.5" 80-track 21-sector DSHD; DMF
## Examples
To read:
- `fluxengine read ibm --auto -s drive:0 -o ibm.img`
- `fluxengine read ibm --160 -s drive:0 -o ibm.img`
- `fluxengine read ibm --180 -s drive:0 -o ibm.img`
- `fluxengine read ibm --320 -s drive:0 -o ibm.img`
- `fluxengine read ibm --360 -s drive:0 -o ibm.img`
- `fluxengine read ibm --720_96 -s drive:0 -o ibm.img`
- `fluxengine read ibm --720_135 -s drive:0 -o ibm.img`
- `fluxengine read ibm --1200 -s drive:0 -o ibm.img`
- `fluxengine read ibm --1232 -s drive:0 -o ibm.img`
- `fluxengine read ibm --1440 -s drive:0 -o ibm.img`
- `fluxengine read ibm --1680 -s drive:0 -o ibm.img`
To write:
- `fluxengine write ibm --160 -d drive:0 -i ibm.img`
- `fluxengine write ibm --180 -d drive:0 -i ibm.img`
- `fluxengine write ibm --320 -d drive:0 -i ibm.img`
- `fluxengine write ibm --360 -d drive:0 -i ibm.img`
- `fluxengine write ibm --720_96 -d drive:0 -i ibm.img`
- `fluxengine write ibm --720_135 -d drive:0 -i ibm.img`
- `fluxengine write ibm --1200 -d drive:0 -i ibm.img`
- `fluxengine write ibm --1232 -d drive:0 -i ibm.img`
- `fluxengine write ibm --1440 -d drive:0 -i ibm.img`
- `fluxengine write ibm --1680 -d drive:0 -i ibm.img`
Mixed-format disks
------------------


@@ -1,5 +1,7 @@
icl30
====
## CP/M; 263kB 35-track DSSD
<!-- This file is automatically generated. Do not edit. -->
# CP/M; 263kB 35-track DSSD
The ICL Model 30 is a reasonably standard CP/M machine using 35-track single
density disks and the traditional CP/M 128-byte sectors --- 30 of them per
@@ -9,3 +11,9 @@ track! Other than that it's another IBM scheme variation.
(no options)
## Examples
To read:
- `fluxengine read icl30 -s drive:0 -o icl30.img`


@@ -1,5 +1,7 @@
mac
====
## 400kB/800kB 3.5" GCR
<!-- This file is automatically generated. Do not edit. -->
# 400kB/800kB 3.5" GCR
Macintosh disks come in two varieties: the newer 1440kB ones, which are
perfectly ordinary PC disks (use the `ibm` profile to read them), and
@@ -36,11 +38,23 @@ standard for disk images is to omit it. If you want them, specify that you want
## Options
- Format variant:
- Format variants:
- `400`: 400kB 80-track SSDD
- `800`: 800kB 80-track DSDD
- `metadata`: read/write 524 byte sectors
## Examples
To read:
- `fluxengine read mac --400 -s drive:0 -o mac.dsk`
- `fluxengine read mac --800 -s drive:0 -o mac.dsk`
To write:
- `fluxengine write mac --400 -d drive:0 -i mac.dsk`
- `fluxengine write mac --800 -d drive:0 -i mac.dsk`
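The `metadata` option can be combined with either variant; a minimal sketch, assuming it is passed as a `--` flag like the variants and using an arbitrary output filename:
```
fluxengine read mac --800 --metadata -s drive:0 -o mac-metadata.dsk
```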
## References
- [MAME's ap_dsk35.cpp file](https://github.com/mamedev/mame/blob/4263a71e64377db11392c458b580c5ae83556bc7/src/lib/formats/ap_dsk35.cpp),


@@ -1,5 +1,7 @@
micropolis
====
## 100tpi MetaFloppy disks
<!-- This file is automatically generated. Do not edit. -->
# 100tpi MetaFloppy disks
Micropolis MetaFloppy disks use MFM and hard sectors. Mod I was 48 TPI and
stored 143k per side. Mod II was 100 TPI and stored 315k per side. Each of the
@@ -45,6 +47,16 @@ need to apply extra options to change the format if desired.
- `630`: 630kB 5.25" DSDD hard-sectored; Micropolis MetaFloppy Mod II
- `vgi`: Read/write VGI format images with 275 bytes per sector
## Examples
To read:
- `fluxengine read micropolis -s drive:0 -o micropolis.img`
To write:
- `fluxengine write micropolis -d drive:0 -i micropolis.img`
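The `vgi` option above selects VGI images instead of raw `.img` files; a sketch, assuming the option is passed as a `--` flag like the variants and using an arbitrary filename:
```
fluxengine read micropolis --vgi -s drive:0 -o micropolis.vgi
```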
## References
- [Micropolis 1040/1050 S-100 Floppy Disk Subsystems User's Manual][micropolis1040/1050].


@@ -1,5 +1,7 @@
mx
====
## Soviet-era PDP-11 clone
<!-- This file is automatically generated. Do not edit. -->
# Soviet-era PDP-11 clone
The DVK (in Russian, an abbreviation of 'Dialogue
Computing Complex') was a late 1970s Soviet personal computer, a cut-down
@@ -40,7 +42,20 @@ Words are all stored little-endian.
## Options
(no options)
- Format variants:
- `110`: 110kB 5.25" 40-track SSSD
- `220ds`: 220kB 5.25" 40-track DSSD
- `220ss`: 220kB 5.25" 80-track SSSD
- `440`: 440kB 5.25" 80-track DSSD
## Examples
To read:
- `fluxengine read mx --110 -s drive:0 -o mx.img`
- `fluxengine read mx --220ds -s drive:0 -o mx.img`
- `fluxengine read mx --220ss -s drive:0 -o mx.img`
- `fluxengine read mx --440 -s drive:0 -o mx.img`
## References


@@ -1,5 +1,7 @@
n88basic
====
## PC8800/PC98 5.25" 77-track 26-sector DSHD
<!-- This file is automatically generated. Do not edit. -->
# PC8800/PC98 5.25"/3.5" 77-track 26-sector DSHD
The N88-BASIC disk format is the one used by the operating system of the same
name for the Japanese PC8800 and PC98 computers. It is another IBM scheme
@@ -12,3 +14,13 @@ boot ROM could only read single density data.)
(no options)
## Examples
To read:
- `fluxengine read n88basic -s drive:0 -o n88basic.img`
To write:
- `fluxengine write n88basic -d drive:0 -i n88basic.img`


@@ -1,5 +1,7 @@
northstar
====
## 5.25" hard sectored
<!-- This file is automatically generated. Do not edit. -->
# 5.25" hard sectored
Northstar Floppy disks use 10-sector hard sectored disks with either FM or MFM
encoding. They may be single- or double-sided. Each of the 10 sectors contains
@@ -20,7 +22,24 @@ equivalent to .img images.
## Options
(no options)
- Format variants:
- `87`: 87.5kB 5.25" 35-track SSSD hard-sectored
- `175`: 175kB 5.25" 40-track SSDD hard-sectored
- `350`: 350kB 5.25" 40-track DSDD hard-sectored
## Examples
To read:
- `fluxengine read northstar --87 -s drive:0 -o northstar.nsi`
- `fluxengine read northstar --175 -s drive:0 -o northstar.nsi`
- `fluxengine read northstar --350 -s drive:0 -o northstar.nsi`
To write:
- `fluxengine write northstar --87 -d drive:0 -i northstar.nsi`
- `fluxengine write northstar --175 -d drive:0 -i northstar.nsi`
- `fluxengine write northstar --350 -d drive:0 -i northstar.nsi`
## References


@@ -1,5 +1,7 @@
psos
====
## 800kB DSDD with PHILE
<!-- This file is automatically generated. Do not edit. -->
# 800kB DSDD with PHILE
pSOS was an influential real-time operating system from the 1980s, used mainly
on 68000-based machines, lasting up until about 2000 when it was bought (and
@@ -18,3 +20,13 @@ and, oddly, swapped sides.
(no options)
## Examples
To read:
- `fluxengine read psos -s drive:0 -o pme.img`
To write:
- `fluxengine write psos -d drive:0 -i pme.img`


@@ -1,5 +1,7 @@
rolandd20
====
## 3.5" electronic synthesiser disks
<!-- This file is automatically generated. Do not edit. -->
# 3.5" electronic synthesiser disks
The Roland D20 is a classic electronic synthesiser with a built-in floppy
drive, used for saving MIDI sequences and samples.
@@ -7,9 +9,13 @@ drive, used for saving MIDI sequences and samples.
Weirdly, it seems to use precisely the same format as the Brother word
processors: a thoroughly non-IBM-compatible custom GCR system.
FluxEngine pretends to support this, but it has had almost no testing, the only
disk image I have seen for it was mostly corrupt, and very little is known
about the format, so I have no idea whether it's correct or not.
FluxEngine supports both reading and writing D20 disks, and provides basic
support for the filesystem, allowing files to be read from and written to D20
disks.
Note that the D20 was never intended to support arbitrary files on its disks and
is very likely to crash if you put unexpected files on a disk. In addition,
while the file format itself is currently unknown, each file has a header
containing what appears to be the name shown in the D20 file browser, so the
name displayed on the D20 is not necessarily the filename on disk.
Please [get in touch](https://github.com/davidgiven/fluxengine/issues/new) if
you know anything about it.
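As a sketch of basic filesystem access, assuming the generic filesystem commands described in the main documentation apply to this profile:
```
fluxengine ls rolandd20 -f drive:0
```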
@@ -18,3 +24,13 @@ you know anything about it.
(no options)
## Examples
To read:
- `fluxengine read rolandd20 -s drive:0 -o rolandd20.img`
To write:
- `fluxengine write rolandd20 -d drive:0 -i rolandd20.img`


@@ -1,5 +1,7 @@
rx50
====
## 400kB 5.25" 80-track 10-sector SSDD
<!-- This file is automatically generated. Do not edit. -->
# 400kB 5.25" 80-track 10-sector SSDD
The Digital RX50 is one of the external floppy drive units used by Digital's
range of computers, especially the DEC Rainbow microcomputer. It is a fairly
@@ -9,3 +11,13 @@ vanilla single-sided IBM scheme variation.
(no options)
## Examples
To read:
- `fluxengine read rx50 -s drive:0 -o rx50.img`
To write:
- `fluxengine write rx50 -d drive:0 -i rx50.img`


@@ -1,4 +1,15 @@
shugart_drive
====
## Adjust configuration for a Shugart drive
<!-- This file is automatically generated. Do not edit. -->
# Adjust configuration for a Shugart drive
(This format has no documentation. Please file a bug.)
This is an extension profile; adding this to the command line will configure
FluxEngine to adjust the pinout to work with a Shugart drive. This only works
on Greaseweazle hardware.
For example:
```
fluxengine read ibm --720 shugart_drive
```


@@ -1,5 +1,7 @@
smaky6
====
## 308kB 5.25" 77-track 16-sector SSDD, hard sectored
<!-- This file is automatically generated. Do not edit. -->
# 308kB 5.25" 77-track 16-sector SSDD, hard sectored
The Smaky 6 is a Swiss computer from 1978 produced by Epsitec. It's based
around a Z80 processor and has one or two Micropolis 5.25" drives which use
@@ -20,6 +22,12 @@ this is completely correct, so don't trust it!
(no options)
## Examples
To read:
- `fluxengine read smaky6 -s drive:0 -o smaky6.img`
## References
- [Smaky Info, 1978-2002 (in French)](https://www.smaky.ch/theme.php?id=sminfo)


@@ -1,5 +1,7 @@
tids990
====
## 1126kB 8" DSSD
<!-- This file is automatically generated. Do not edit. -->
# 1126kB 8" DSSD
The Texas Instruments DS990 was a multiuser modular computing system from the late 1970s,
based around the TMS-9900 processor (as used by the TI-99). It had an 8" floppy
@@ -20,6 +22,16 @@ FluxEngine will read and write these (but only the DSDD MFM variant).
(no options)
## Examples
To read:
- `fluxengine read tids990 -s drive:0 -o tids990.img`
To write:
- `fluxengine write tids990 -d drive:0 -i tids990.img`
## References
- [The FD1000 Depot Maintenance


@@ -1,5 +1,7 @@
tiki
====
## CP/M
<!-- This file is automatically generated. Do not edit. -->
# CP/M
The Tiki 100 is a Z80-based Norwegian microcomputer from the mid 1980s intended
for educational use. It mostly ran an unbranded CP/M clone, and uses fairly
@@ -8,5 +10,18 @@ on the precise format.
## Options
(no options)
- Format variants:
- `90`: 90kB 40-track 18-sector SSSD
- `200`: 200kB 40-track 10-sector SSDD
- `400`: 400kB 40-track 10-sector DSDD
- `800`: 800kB 80-track 10-sector DSDD
## Examples
To read:
- `fluxengine read tiki --90 -s drive:0 -o tiki.img`
- `fluxengine read tiki --200 -s drive:0 -o tiki.img`
- `fluxengine read tiki --400 -s drive:0 -o tiki.img`
- `fluxengine read tiki --800 -s drive:0 -o tiki.img`


@@ -1,5 +1,7 @@
victor9k
====
## 1224kB 5.25" DSDD GCR
<!-- This file is automatically generated. Do not edit. -->
# 1224kB 5.25" DSDD GCR
The Victor 9000 / Sirius One was a rather strange old 8086-based machine
which used a disk format very reminiscent of the Commodore format; not a
@@ -36,7 +38,21 @@ FluxEngine can read and write both the single-sided and double-sided variants.
## Options
(no options)
- Format variants:
- `612`: 612kB 80-track DSHD GCR
- `1224`: 1224kB 80-track DSHD GCR
## Examples
To read:
- `fluxengine read victor9k --612 -s drive:0 -o victor9k.img`
- `fluxengine read victor9k --1224 -s drive:0 -o victor9k.img`
To write:
- `fluxengine write victor9k --612 -d drive:0 -i victor9k.img`
- `fluxengine write victor9k --1224 -d drive:0 -i victor9k.img`
## References


@@ -1,5 +1,7 @@
zilogmcz
====
## 320kB 8" 77-track SSSD hard-sectored
<!-- This file is automatically generated. Do not edit. -->
# 320kB 8" 77-track SSSD hard-sectored
The Zilog MCZ is an extremely early Z80 development system, produced by
Zilog, which came out in 1976. It used twin 8-inch hard sectored floppy
@@ -18,16 +20,19 @@ bytes per sector --- 128 bytes of user payload plus two two-byte metadata
words used to construct linked lists of sectors for storing files. These
stored 320kB each.
FluxEngine has experimental read support for these disks, based on a single
Catweasel flux file I've been able to obtain, which only contained 70 tracks.
I haven't been able to try this for real. If anyone has any of these disks,
an 8-inch drive, a FluxEngine and the appropriate adapter, please [get in
touch](https://github.com/davidgiven/fluxengine/issues/new)...
FluxEngine has read support for these, including support for RIO's ZDOS file
system.
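As a sketch of filesystem access, assuming the generic filesystem commands apply to this profile and using a hypothetical flux file name:
```
fluxengine ls zilogmcz -f zilogmcz.flux
```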
## Options
(no options)
## Examples
To read:
- `fluxengine read zilogmcz -s drive:0 -o zilogmcz.img`
## References
* [About the Zilog MCZ](http://www.retrotechnology.com/restore/zilog.html),


@@ -39,10 +39,10 @@ If you actually have a forty track drive, you need to tell FluxEngine. This is
done by adding the special profile `40track_drive`:
```
fluxengine write ibm360 40track_drive -i image.img -d drive:0
fluxengine write ibm --360 40track_drive -i image.img -d drive:0
```
It should then Just Work. This is supported by both FluxEngine and GreaseWeazle
It should then Just Work. This is supported by both FluxEngine and Greaseweazle
hardware.
Obviously you can't write an eighty-track format using a forty-track drive!
@@ -63,7 +63,7 @@ The FluxEngine client supports these with the `apple2_drive` profile:
fluxengine write apple2 apple2_drive -i image.img -d drive:0
```
This is supported only by GreaseWeazle hardware.
This is supported only by Greaseweazle hardware.
Shugart drives
--------------
@@ -86,5 +86,5 @@ fluxengine write atarist720 shugart_drive -i image.img -d drive:0
(If you have a 40-track Shugart drive, use _both_ `shugart_drive` and
`40track_drive`.)
This is supported only by GreaseWeazle hardware.
This is supported only by Greaseweazle hardware.


@@ -17,13 +17,16 @@ The following file systems are supported so far.
|:-----------------------------------------|:-----:|:------:|-------|
| Acorn DFS | Y | | |
| Amiga FFS | Y | Y | Both OFS and FFS |
| AppleDOS / ProDOS | Y | Y | With a choice of sector remapping |
| Brother 120kB | Y | Y | |
| Commodore CbmFS | Y | | Only 1541 disks so far |
| CP/M | Y | | Requires configuration for each machine |
| FatFS (a.k.a. MS-DOS) | Y | Y | FAT12, FAT16, FAT32; not Atari (AFAIK!) |
| Hewlett-Packard LIF | Y | | |
| Macintosh HFS | Y | Y | Only AppleDouble files may be written |
| Apple ProDOS | Y | | |
| pSOS' PHILE | Y | | Probably unreliable due to lack of documentation |
| Smaky 6 | Y | | |
| Zilog MCZ RIO's ZDOS | Y | | |
{: .datatable }
Please note that Atari disks do _not_ use standard FatFS, and the library I'm
@@ -38,19 +41,19 @@ Using it
To use, try syntax like this:
```
fluxengine ls ibm180 -f drive:0
fluxengine ls ibm --180 -f drive:0
```
`ibm180` is the format, which selects the most common filesystem automatically.
`-f drive:0` specifies a flux source/sink, in this case a real disk. You may
also specify a flux file (read only). Disk images may be specified with `-i
disk.img` (read/write).
`ibm --180` is the format, which selects the most common filesystem
automatically. `-f drive:0` specifies a flux source/sink, in this case a real
disk. You may also specify a flux file (read only). Disk images may be
specified with `-i disk.img` (read/write).
Commands which take filename parameters typically use `-p` to indicate the path
on the disk, and `-l` for the local filename. For example:
```
fluxengine putfile ibm180 -f drive:0 -p ondisk.pcx -l z.pcx
fluxengine putfile ibm --180 -f drive:0 -p ondisk.pcx -l z.pcx
```
This will copy the file `z.pcx` onto the disk and call it `ONDISK.PCX`.
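Copying a file back off the disk should work the same way via the matching `getfile` command, assuming your build provides it (the filenames here are arbitrary):
```
fluxengine getfile ibm --180 -f drive:0 -p ONDISK.PCX -l copy.pcx
```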
@@ -83,7 +86,7 @@ default; for example, Macintosh HFS filesystems are common on 3.5" floppies. You
can do this as follows:
```
fluxengine format ibm1440 -f drive:1 --filesystem.type=MACHFS
fluxengine format ibm --1440 -f drive:1 --filesystem.type=MACHFS
```
Some filesystems won't work on some disks --- don't try this with Amiga FFS, for


@@ -1,10 +1,10 @@
Using the FluxEngine client software with GreaseWeazle hardware
Using the FluxEngine client software with Greaseweazle hardware
===============================================================
The FluxEngine isn't the only project which does this; another one is the
[GreaseWeazle](https://github.com/keirf/Greaseweazle/wiki), a Blue Pill based
[Greaseweazle](https://github.com/keirf/Greaseweazle/wiki), a Blue Pill based
completely open source solution. This requires more work to set up (or you can
buy a prebuilt GreaseWeazle board), but provides completely open source
buy a prebuilt Greaseweazle board), but provides completely open source
hardware which doesn't require the use of the Cypress Windows-based tools that
the FluxEngine does. Luckily, the FluxEngine software supports it almost
out-of-the-box --- just plug it in and nearly everything should work. The
@@ -16,10 +16,10 @@ FluxEngine makes things complicated when you're not using the FluxEngine client
software with a FluxEngine board, but I'm afraid it's too late to change that
now. Sorry.
**If you are using GreaseWeazle-compatible hardware** such as the
**If you are using Greaseweazle-compatible hardware** such as the
[adafruit-floppy](https://github.com/adafruit/Adafruit_Floppy) project, then
FluxEngine will still work; however, as the USB VID/PID won't be that of a real
GreaseWeazle, the the FluxEngine client can't autodetect it. Instead, you'll
Greaseweazle, the FluxEngine client can't autodetect it. Instead, you'll
need to specify the serial port manually with something like
`--usb.greaseweazle.port=/dev/ttyACM0` or `--usb.greaseweazle.port=COM5`.
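For example, a full invocation might look like this (the device path is whatever your operating system assigns):
```
fluxengine read ibm --1440 -s drive:0 -o ibm.img --usb.greaseweazle.port=/dev/ttyACM0
```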
@@ -32,7 +32,7 @@ Driver box says `WinUSB` and the right one says `USB Serial (CDC)`. Then press
What works
----------
Supported features with the GreaseWeazle include:
Supported features with the Greaseweazle include:
- simple reading and writing of disks, seeking etc
- erasing disks
@@ -41,8 +41,9 @@ Supported features with the GreaseWeazle include:
`--usb.greaseweazle.bus_type=SHUGART` or `IBMPC`; the default is `IBMPC`)
- Apple 5.25 floppy interfaces (via `--usb.greaseweazle.bus_type=APPLE2`)
Which device types are supported depend on the hardware. Genuine Greaseweazle hardware supports SHUGART and IBMPC.
APPLE2 is only supported with hand wiring and the Adafruit\_Floppy greaseweazle-compatible firmware.
Which device types are supported depend on the hardware. Genuine Greaseweazle
hardware supports SHUGART and IBMPC. APPLE2 is only supported with hand wiring
and the Adafruit\_Floppy greaseweazle-compatible firmware.
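For example, reading a disk on a drive wired to a Shugart bus might look like this (a sketch; the format and filenames are arbitrary):
```
fluxengine read ibm --720_135 -s drive:0 -o ibm.img --usb.greaseweazle.bus_type=SHUGART
```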
What doesn't work
-----------------
@@ -59,12 +60,12 @@ Who to contact
--------------
I want to make it clear that the FluxEngine code is _not_ supported by the
GreaseWeazle team. If you have any problems, please [contact
Greaseweazle team. If you have any problems, please [contact
me](https://github.com/davidgiven/fluxengine/issues/new) and not them.
In addition, the GreaseWeazle release cycle is not synchronised to the
In addition, the Greaseweazle release cycle is not synchronised to the
FluxEngine release cycle, so it's possible you'll have a version of the
GreaseWeazle firmware which is not supported by FluxEngine. Hopefully, it'll
Greaseweazle firmware which is not supported by FluxEngine. Hopefully, it'll
detect this and complain. Again, [file an
issue](https://github.com/davidgiven/fluxengine/issues/new) and I'll look into
it.

doc/pcb.png (new binary file, 22 KiB; not shown)
doc/screenshot-details.png (new binary file, 28 KiB; not shown)


@@ -80,11 +80,11 @@ as you need an adapter cable or board, but this will allow you to replicate the
FluxEngine hardware on a $2 Blue Pill.
I am _not_ planning on replacing the PSoC5 with a Blue Pill, because someone
already has: [the GreaseWeazle](https://github.com/keirf/Greaseweazle/wiki) is
already has: [the Greaseweazle](https://github.com/keirf/Greaseweazle/wiki) is
a completely open source firmware package which will read and write Supercard
Pro files via a standard Blue Pill or via a prebuilt board. It's supported by
the FluxEngine client software, and you should, mostly, be able to use
GreaseWeazle hardware interchangeably with FluxEngine hardware. See the
Greaseweazle hardware interchangeably with FluxEngine hardware. See the
[dedicated page](greaseweazle.md) for more information.
@@ -95,41 +95,43 @@ For reference, here are the FDC pinouts:
```ditaa
:-E -s 0.75
+--+--+ +--+--+
DISKCHG ---+34+33+ DISKCHG ---+34+33+
+--+--+ +--+--+
SIDE1 ---+32+31+ SIDE1 ---+32+31+
+--+--+ +--+--+
RDATA ---+30+29+ RDATA ---+30+29+
+--+--+ +--+--+
WPT ---+28+27+ WPT ---+28+27+
+--+--+ +--+--+
TRK00 ---+26+25+ TRK00 ---+26+25+
+--+--+ +--+--+
WGATE ---+24+23+ WGATE ---+24+23+
+--+--+ +--+--+
WDATA ---+22+21+ WDATA ---+22+21+
+--+--+ +--+--+
STEP ---+20+19+ STEP ---+20+19+
+--+--+ +--+--+
DIR/SIDE1 ---+18+17+ DIR/SIDE1 ---+18+17+
+--+--+ +--+--+
MOTEB ---+16+15+ MOTEB ---+16+15+
+--+--+ +--+--+
DRVSA ---+14+13+ DS3 ---+14+13+
+--+--+ +--+--+
DRVSB ---+12+11+ DS2 ---+12+11+
+--+--+ +--+--+
MOTEA ---+10+9 + DS1 ---+10+9 +
+--+--+ +--+--+
INDEX ---+8 +7 + INDEX ---+8 +7 +
+--+--+ +--+--+
n/c ---+6 +5 + DS4 ---+6 +5 +
+--+--+ +--+--+
n/c ---+4 +3 + INU ---+4 +3 +
+--+--+ +--+--+
REDWC ---+2 +1 + REDWC ---+2 +1 +
+--+--+ +--+--+
+-- GND +-- GND
| (entire column) | (entire column)
+----+-+--+ +----+-+--+
DISKCHG ---+ 34 + 33 + DISKCHG ---+ 34 + 33 +
+----+----+ +----+----+
SIDE1 ---+ 32 + 31 + SIDE1 ---+ 32 + 31 +
+----+----+ +----+----+
RDATA ---+ 30 + 29 + RDATA ---+ 30 + 29 +
+----+----+ +----+----+
WPT ---+ 28 + 27 + WPT ---+ 28 + 27 +
+----+----+ +----+----+
TRK00 ---+ 26 + 25 + TRK00 ---+ 26 + 25 +
+----+----+ +----+----+
WGATE ---+ 24 + 23 + WGATE ---+ 24 + 23 +
+----+----+ +----+----+
WDATA ---+ 22 + 21 + WDATA ---+ 22 + 21 +
+----+----+ +----+----+
STEP ---+ 20 + 19 + STEP ---+ 20 + 19 +
+----+----+ +----+----+
DIR/SIDE1 ---+ 18 + 17 + DIR/SIDE1 ---+ 18 + 17 +
+----+----+ +----+----+
MOTEB ---+ 16 + 15 + MOTEB ---+ 16 + 15 +
+----+----+ +----+----+
DRVSA ---+ 14 + 13 + DS3 ---+ 14 + 13 +
+----+----+ +----+----+
DRVSB ---+ 12 + 11 + DS2 ---+ 12 + 11 +
+----+----+ +----+----+
MOTEA ---+ 10 + 9 + DS1 ---+ 10 + 9 +
+----+----+ +----+----+
INDEX ---+ 8 + 7 + INDEX ---+ 8 + 7 +
+----+----+ +----+----+
n/c ---+ 6 + 5 + DS4 ---+ 6 + 5 +
+----+----+ +----+----+
n/c ---+ 4 + 3 + INU ---+ 4 + 3 +
+----+----+ +----+----+
REDWC ---+ 2 + 1 + REDWC ---+ 2 + 1 +
+----+----+ +----+----+
PC interface Shugart interface


@@ -11,6 +11,13 @@ moving too quickly for the documentation to keep up. It does respond to
`--help` or `help` depending on context. There are some common properties,
described below.
If possible, try using the GUI, which should provide simplified access for most
common operations.
<div style="text-align: center">
<a href="doc/screenshot-details.png"><img src="doc/screenshot-details.png" style="width:60%" alt="screenshot of the GUI in action"></a>
</div>
### Core concepts
FluxEngine's job is to read magnetic data (called _flux_) off a disk, decode
@@ -43,7 +50,7 @@ file while changing the decoder options, to save disk wear. It's also much faste
### Connecting it up
To use, simply plug your FluxEngine (or [GreaseWeazle](greaseweazle.md)) into
To use, simply plug your FluxEngine (or [Greaseweazle](greaseweazle.md)) into
your computer and run the client. If a single device is plugged in, it will be
automatically detected and used.
@@ -68,10 +75,10 @@ Here are some sample invocations:
```
# Read a PC 1440kB disk, producing a disk image with the default name
# (ibm.img)
$ fluxengine read ibm1440
$ fluxengine read ibm --1440
# Write a PC 1440kB disk to drive 1
$ fluxengine write ibm1440 -i image.img -d drive:1
$ fluxengine write ibm --1440 -i image.img -d drive:1
# Read an Eco1 CP/M disk, making a copy of the flux into a file
$ fluxengine read eco1 --copy-flux-to copy.flux -o eco1.ldbs
@@ -90,30 +97,31 @@ $ cat config.textpb
encoder {
ibm {
trackdata {
emit_iam: false
}
emit_iam: false
}
}
}
$ fluxengine write ibm1440 config.textpb -i image.img
$ fluxengine write ibm --1440 config.textpb -i image.img
```
...or you can specify them on the command line:
```
$ fluxengine write ibm1440 -i image.img --encoder.ibm.trackdata.emit_iam=false
$ fluxengine write ibm --1440 -i image.img --encoder.ibm.emit_iam=false
```
Both the above invocations are equivalent. The text files use [Google's
protobuf syntax](https://developers.google.com/protocol-buffers), which is
hierarchical, type-safe, and easy to read.
The `ibm1440` string above is actually a reference to an internal configuration
file containing all the settings for writing PC 1440kB disks. You may specify
as many profile names or textpb files as you wish; they are all merged left to
right. You can see all these settings by doing:
The `ibm` string above is actually a reference to an internal configuration file
containing all the settings for writing PC disks, and the `--1440` refers to a
specific definition inside it. You may specify as many profile names or textpb
files as you wish; they are all merged left to right. You can see all these
settings by doing:
```
$ fluxengine write ibm1440 --config
$ fluxengine write ibm --1440 --config
```
The `--config` option will cause the current configuration to be dumped to the
@@ -131,29 +139,30 @@ different task. Run each one with `--help` to get a full list of
(non-configuration-setting) options; this describes only basic usage of the
more common tools.
- `fluxengine read <profile> -s <flux source> -o <image output>`
- `fluxengine read <profile> <options> -s <flux source> -o <image output>`
Reads flux (possibly from a disk) and decodes it into a file system image.
`<profile>` is a reference to an internal input configuration file
describing the format.
Reads flux (possibly from a disk) and decodes it into a file system image.
`<profile>` is a reference to an internal input configuration file
describing the format. `<options>` may be any combination of options
defined by the profile.
- `fluxengine write <profile> -i <image input> -d <flux destination>`
Reads a filesystem image and encodes it into flux (possibly writing to a
disk). `<profile>` is a reference to an internal output configuration file
describing the format.
Reads a filesystem image and encodes it into flux (possibly writing to a
disk). `<profile>` is a reference to an internal output configuration file
describing the format.
- `fluxengine rawread -s <flux source> -d <flux destination>`
Reads flux (possibly from a disk) and writes it to a flux file without
doing any decoding. You can specify a profile if you want to read a subset
of the disk.
Reads flux (possibly from a disk) and writes it to a flux file without doing
any decoding. You can specify a profile if you want to read a subset of the
disk.
- `fluxengine rawwrite -s <flux source> -d <flux destination>`
Reads flux from a file and writes it (possibly to a disk) without doing any
encoding. You can specify a profile if you want to write a subset of the
disk.
Reads flux from a file and writes it (possibly to a disk) without doing any
encoding. You can specify a profile if you want to write a subset of the
disk.
- `fluxengine merge -s <fluxfile> -s <fluxfile...> -d <fluxfile>`
@@ -165,20 +174,20 @@ more common tools.
- `fluxengine inspect -s <flux source> -c <cylinder> -h <head> -B`
Reads flux (possibly from a disk) and does various analyses of it to try
and detect the clock rate, display raw flux information, examine the
underlying data from the FluxEngine board, etc. There are lots of options
but the command above is the most useful.
Reads flux (possibly from a disk) and does various analyses of it to try and
detect the clock rate, display raw flux information, examine the underlying
data from the FluxEngine board, etc. There are lots of options but the
command above is the most useful.
- `fluxengine rpm`
Measures the rotation speed of a drive. For hard-sectored disks, you
probably want to add the name of a read profile to configure the number of
sectors.
Measures the rotation speed of a drive. For hard-sectored disks, you
probably want to add the name of a read profile to configure the number of
sectors.
- `fluxengine seek -c <cylinder>`
Seeks a drive to a particular cylinder.
Seeks a drive to a particular cylinder.
There are other tools; try `fluxengine --help`.
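As a concrete sketch of two of the less common tools (the flux file names here are arbitrary):
```
# Combine two reads of the same disk into a single flux file.
fluxengine merge -s pass1.flux -s pass2.flux -d combined.flux

# Analyse cylinder 0, head 0 of the combined flux.
fluxengine inspect -s combined.flux -c 0 -h 0 -B
```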
@@ -198,19 +207,19 @@ FluxEngine supports a number of ways to get or put flux. When using the `-s` or
- `drive:<n>`
Read from or write to a specific drive.
Read from or write to a specific drive.
- `<filename.flux>`
Read from or write to a native FluxEngine flux file.
Read from or write to a native FluxEngine flux file.
- `<filename.scp>`
Read from or write to a Supercard Pro `.scp` flux file.
Read from or write to a Supercard Pro `.scp` flux file.
- `<filename.cwf>`
Read from a Catweasel flux file. **Read only.**
Read from a Catweasel flux file. **Read only.**
- `<filename.a2r>`
@@ -218,8 +227,8 @@ FluxEngine supports a number of ways to get or put flux. When using the `-s` or
- `kryoflux:<directory>`
Read from a Kryoflux stream, where `<path>` is the directory containing the
stream files. **Read only.**
Read from a Kryoflux stream, where `<directory>` is the directory containing
the stream files. **Read only.**
- `flx:<directory>`
@@ -228,53 +237,54 @@ FluxEngine supports a number of ways to get or put flux. When using the `-s` or
- `erase:`
Read nothing --- writing this to a disk will magnetically erase a track.
**Read only.**
Read nothing --- writing this to a disk will magnetically erase a track.
**Read only.**
- `testpattern:`
Read a test pattern, which can be written to a disk to help diagnosis.
**Read only.**
Read a test pattern, which can be written to a disk to help diagnosis.
**Read only.**
- `au:<directory>`
Write to a series of `.au` files, one file per track, which can be loaded
into an audio editor (such as Audacity) as a simple logic analyser. **Write
only.**
Write to a series of `.au` files, one file per track, which can be loaded
into an audio editor (such as Audacity) as a simple logic analyser.
**Write only.**
- `vcd:<directory>`
Write to a series of `.vcd` files, one file per track, which can be loaded
into a logic analyser (such as Pulseview) for analysis. **Write only.**
Write to a series of `.vcd` files, one file per track, which can be loaded
into a logic analyser (such as Pulseview) for analysis. **Write only.**
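Any of these strings can be passed to `-s` or `-d`. For example, decoding a previously captured Supercard Pro file (hypothetical filename) into a disk image:
```
fluxengine read ibm --1440 -s image.scp -o ibm.img
```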
### Image sources and destinations
FluxEngine also supports a number of file system image formats. When using the
`-i` or `-o` options (for input and output), you can use any of these strings:
- `<filename.adf>`, `<filename.d81>`, `<filename.img>`, `<filename.st>`, `<filename.xdf>`
- `<filename.adf>`, `<filename.d81>`, `<filename.img>`, `<filename.st>`,
`<filename.xdf>`
Read from or write to a simple headerless image file (all these formats are
the same). This will probably want configuration via the
`input/output.image.img.*` configuration settings to specify all the
parameters.
Read from or write to a simple headerless image file (all these formats are
the same). This will probably want configuration via the
`input/output.image.img.*` configuration settings to specify all the
parameters.
- `<filename.diskcopy>`
Read from or write to a [DiskCopy
4.2](https://en.wikipedia.org/wiki/Disk_Copy) image file, commonly used by
Apple Macintosh emulators.
Read from or write to a [DiskCopy
4.2](https://en.wikipedia.org/wiki/Disk_Copy) image file, commonly used by
Apple Macintosh emulators.
- `<filename.td0>`
Read a [Sydex Teledisk TD0
file](https://web.archive.org/web/20210420224336/http://dunfield.classiccmp.org/img47321/teledisk.htm)
image file. Note that only uncompressed images are supported (so far).
Read a [Sydex Teledisk TD0
file](https://web.archive.org/web/20210420224336/http://dunfield.classiccmp.org/img47321/teledisk.htm)
image file. Note that only uncompressed images are supported (so far).
- `<filename.jv3>`
Read from a JV3 image file, commonly used by TRS-80 emulators. **Read
only.**
Read from a JV3 image file, commonly used by TRS-80 emulators. **Read
only.**
- `<filename.dim>`
@@ -382,38 +392,27 @@ behaviour.
- `--drive.revolutions=X`
When reading, spin the disk X times. X
can be a floating point number. The default is usually 1.2. Some formats
default to 1. Increasing the number will sample more data, and can be
useful on dubious disks to try and get a better read.
When reading, spin the disk X times. X can be a floating point number. The
default is usually 1.2. Some formats default to 1. Increasing the number
will sample more data, and can be useful on dubious disks to try and get a
better read.
- `--drive.sync_with_index=true|false`
Wait for an index pulse
before starting to read the disk. (Ignored for write operations.) By
default FluxEngine doesn't, as it makes reads faster, but when diagnosing
disk problems it's helpful to have all your data start at the same place
each time.
Wait for an index pulse before starting to read the disk. (Ignored for write
operations.) By default FluxEngine doesn't, as it makes reads faster, but
when diagnosing disk problems it's helpful to have all your data start at
the same place each time.
- `--drive.index_source=X`
- `--drive.index_mode=X`
Set the source of index pulses when reading or writing respectively. This
is for use with drives which don't produce index pulse data. `X` can be
`INDEXMODE_DRIVE` to get index pulses from the drive, `INDEXMODE_300` to
fake 300RPM pulses, or `INDEXMODE_360` to fake 360RPM pulses. Note this
has no effect on the _drive_, so it doesn't help with flippy disks, but is
useful for using very old drives with FluxEngine itself. If you use this
option, then any index marks in the sampled flux are, of course, garbage.
- `--flux_source.rescale=X, --flux_sink.rescale=X`
When reading or writing a floppy on a drive that doesn't match the
original drive RPM, the flux periods can be scaled to compensate.
For example, to read or write a PC-98 1.2MB (360rpm) floppy using a 300rpm
floppy drive:
`--flux_source.rescale=1.2 --flux_sink.rescale=1.2`
Set the source of index pulses when reading or writing respectively. This
is for use with drives which don't produce index pulse data. `X` can be
`INDEXMODE_DRIVE` to get index pulses from the drive, `INDEXMODE_300` to
fake 300RPM pulses, or `INDEXMODE_360` to fake 360RPM pulses. Note this
has no effect on the _drive_, so it doesn't help with flippy disks, but is
useful for using very old drives with FluxEngine itself. If you use this
option, then any index marks in the sampled flux are, of course, garbage.
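The drive options above can be combined on a single command line; a sketch with arbitrary values:
```
# Sample three revolutions and start each read at the index pulse.
fluxengine read ibm --1440 -s drive:0 -o ibm.img --drive.revolutions=3 --drive.sync_with_index=true
```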
## Visualisation
@@ -439,8 +438,8 @@ wrote to do useful things. These are built alongside FluxEngine.
- `brother120tool`, `brother240tool`
Does things to Brother word processor disks. These are [documented on the
Brother disk format page](disk-brother.md).
Does things to Brother word processor disks. These are [documented on the
Brother disk format page](disk-brother.md).
## The recommended workflow
@@ -466,13 +465,13 @@ $ fluxengine read brother -s brother.flux -o brother.img --decoder.write_csv_to=
Apart from being drastically faster, this avoids touching the (potentially
physically fragile) disk.
If the disk is particularly dodgy, you can force FluxEngine not to retry
failed reads with `--retries=0`. This reduces head movement. **This is not
If the disk is particularly dodgy, you can force FluxEngine not to retry failed
reads with `--decoder.retries=0`. This reduces head movement. **This is not
recommended.** Floppy disks are inherently unreliable, and the occasional bit
error is perfectly normal; FluxEngine will retry and the sector will read
fine next time. If you prevent retries, then not only do you get bad sectors
in the resulting image, but the flux file itself contains the bad read, so
attempting a decode of it will just reproduce the same bad data.
error is perfectly normal; FluxEngine will retry and the sector will read fine
next time. If you prevent retries, then not only do you get bad sectors in the
resulting image, but the flux file itself contains the bad read, so attempting a
decode of it will just reproduce the same bad data.
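If you do need to suppress retries for diagnosis, a sketch (again, not recommended for archiving) looks like:
```
fluxengine read brother --240 -s drive:0 -o brother.img --decoder.retries=0
```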
See also the [troubleshooting page](problems.md) for more information about
reading dubious disks.


@@ -25,7 +25,7 @@ SetCompressor /solid lzma
GreaseWeazle hardware. It also allows manipulation of flux files and disk \
images, so it's useful without any hardware.$\r$\n\
$\r$\n\
This wizard will install WordGrinder on your computer.$\r$\n\
This wizard will install FluxEngine on your computer.$\r$\n\
$\r$\n\
$_CLICK"
@@ -130,7 +130,7 @@ SectionEnd
Section "Desktop Shortcut"
SetOutPath "$DOCUMENTS"
CreateShortCut "$DESKTOP\WordGrinder.lnk" "$INSTDIR\fluxengine-gui.exe" "" "$INSTDIR\fluxengine-gui.exe" 0
CreateShortCut "$DESKTOP\FluxEngine.lnk" "$INSTDIR\fluxengine-gui.exe" "" "$INSTDIR\fluxengine-gui.exe" 0
SectionEnd
;--------------------------------

Some files were not shown because too many files have changed in this diff.