There are many different ways to do it, with many pros and cons:
1. Writing data as text, and reading it in as text
+ Simple, usually easy to do
+ Just works
+ Easy to edit by hand
+ Generally quick to parse
- Brittle: breaks very easily if the format shifts at all
- Quickly gets unwieldy
- Large file size
- Very difficult to extend and scale
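To make option 1 concrete, here's a minimal sketch (file name and fields are made up). The line order is the entire implicit "schema" -- reorder or add a line and the reader breaks, which is exactly the brittleness listed above:

```python
# Option 1 sketch: a few fields as plain text, one value per line.
def save(path, name, score):
    with open(path, "w") as f:
        f.write(f"{name}\n{score}\n")

def load(path):
    with open(path) as f:
        name = f.readline().strip()
        score = int(f.readline())  # crashes if this line isn't a number
    return name, score

save("game.txt", "alice", 42)
print(load("game.txt"))  # ('alice', 42)
```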
2. Just write it out as binary data, read it in in the same order
Most of the same pros and cons as above, except:
+ Small file size
+ The quickest parsing possible
- Difficult to edit by hand
- Can break when used on a different computer (endianness)
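A sketch of option 2 using Python's `struct` module (format and file name are illustrative). Note the `"<"` prefix: it pins the byte order to little-endian explicitly, which sidesteps the cross-machine endianness problem mentioned above:

```python
import struct

# "<" = little-endian, no padding; "i" = 32-bit int, "d" = 64-bit double.
FMT = "<id"

def save(path, count, ratio):
    with open(path, "wb") as f:
        f.write(struct.pack(FMT, count, ratio))

def load(path):
    with open(path, "rb") as f:
        return struct.unpack(FMT, f.read(struct.calcsize(FMT)))

save("data.bin", 7, 0.5)
print(load("data.bin"))  # (7, 0.5)
```

Without an explicit byte-order prefix, `struct` uses the machine's native order and padding, and the file stops being portable.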
3. Dump it out as a structured text format handled by a library (like JSON)
+ Usually very fast
+ Easy to work with
+ Easy to understand
+ Easy to edit by hand
- Lack of schema and flexibility of format can make things painful
- Easy to break when editing by hand (by, for instance, forgetting a comma)
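Option 3 in Python is only a few lines (the keys and file name here are invented): the library handles quoting, nesting, and types, so the whole problem reduces to building one dict:

```python
import json

state = {"name": "alice", "score": 42, "inventory": ["sword", "potion"]}

with open("save.json", "w") as f:
    json.dump(state, f, indent=2)  # indent keeps it readable for hand-editing

with open("save.json") as f:
    loaded = json.load(f)

print(loaded == state)  # True
```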
4. Dump it out and read it in as a schema format (like Protocol Buffers or Cap'n Proto)
+ Incredibly fast
+ Very easy interface
+ Schema enforcement
+ Backwards-compatible for easy scaling
- Not hand-editable
- Harder to debug: when something goes wrong, you can't just open the file and look
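For a feel of option 4, here's a hypothetical Protocol Buffers schema sketch (message and field names are invented). The backwards compatibility comes from the field numbers: fields are identified on the wire by number, not name, so old readers skip unknown fields and new readers fill in defaults for missing ones:

```proto
// save.proto -- hypothetical schema for illustration
syntax = "proto3";

message SaveGame {
  string name = 1;
  int32 score = 2;
  repeated string inventory = 3;
  // Added later; old files simply lack it and still decode fine.
  string checkpoint = 4;
}
```

You run the `protoc` compiler over this to generate the read/write code in your language, which is where the "very easy interface" comes from.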
5. Use something like SQLite as an application file format, and operate on the file as a database
+ Very fast (usually at least as fast as a binary format)
+ Don't have to load the whole file into memory while working
+ Can use SQL to work on the data, including features like JOINs and views
+ Easy to make backwards compatible
+ Easy to edit file using sqlite3 command
+ Transactions
- Schema can be hard to understand
- Can be easy to ruin data if you don't know what you're doing
- If you don't know SQL already, it can be a hell of a learning curve
- Have to either adopt an ORM or shuffle data between database and application manually
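With option 5 the save file *is* the database. A minimal sketch with Python's built-in `sqlite3` module (table and column names are invented; `:memory:` stands in for a real path like `save.db`):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # use a file path in practice
con.execute("CREATE TABLE IF NOT EXISTS scores (name TEXT, score INTEGER)")

with con:  # the with-block wraps the writes in a transaction
    con.execute("INSERT INTO scores VALUES (?, ?)", ("alice", 42))

rows = con.execute("SELECT name, score FROM scores").fetchall()
print(rows)  # [('alice', 42)]
con.close()
```

The `with con:` block is the transaction point: if anything inside it raises, the insert rolls back and the file is left consistent.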
6. Some combination of the above, but zipped into an archive.
+ Combined advantages of the above
+ Ability to work with files like a filesystem
- Combined weaknesses of the above
- Much more complexity
- Usually needs a standard manifest or metadata file to index what's in the archive, plus version information
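A sketch of option 6 with the stdlib `zipfile` module. The manifest layout here is completely made up -- as noted, there's no standard, so every application invents its own:

```python
import json
import zipfile

# Write: a manifest plus one data file, all inside a single zip.
with zipfile.ZipFile("project.zip", "w") as z:
    z.writestr("manifest.json",
               json.dumps({"version": 1, "files": ["notes.txt"]}))
    z.writestr("notes.txt", "hello")

# Read: consult the manifest first, then pull out the files it lists.
with zipfile.ZipFile("project.zip") as z:
    manifest = json.loads(z.read("manifest.json"))
    notes = z.read("notes.txt").decode()

print(manifest["version"], notes)  # 1 hello
```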
Those are the big ones. I don't do either of the first two unless I need a file with only one or two fields total. The last four all have their strengths and weaknesses, so you'd want to choose based on your situation. There are other solutions, but most of them amount to creating something that fits into one of those four categories (like rolling your own extensible text or binary format).
And you're wrong about there being some kind of standard. There are a lot of tools that can help you, but as far as "standards" go, everybody does their own thing.
* LibreOffice does 6 (ODF is a zipped pile of XMLs)
* GnuCash does 3 or 5, depending on settings (XML or SQLite)
* Microsoft's document format these days (docx) is also a zipped pile of XMLs
* Transmission does 3 (JSON for its config and temp files)
It's just down to what works best for your situation at hand. I'd suggest experimenting with and using them all, so you can have the best judgment when choosing any specific one in the future.