Commit 6ba89ee6 authored by grothoff's avatar grothoff


git-svn-id: 140774ce-b5e7-0310-ab8b-a85725594a96
Primary developers (0.9.x series):
Christian Grothoff <>
Heikki Lindholm <>
Nils Durner <>
Milan Bouchet-Valat <>
Code contributions also came from:
Adam Warrington [ UPnP ]
Alex Harper [ OS X CPU load ]
Andrew McDonald <> [ SHA-512]
Antti Salonen
Blake Matheny <>
Clytie Siddall <>
David Kuehling <>
Enrico Scholz <>
Eric Haumant
Eric Noack <>
Felix von Leitner [ diet libc snprintf for win32 ]
Gerd Knorr <>
Glenn McGrath <>
Hendrik Pagenhardt <>
Heikki Lindholm <>
Igor Wronsky <>
Ioana Patrascu <>
Jake Dust <>
James Blackwell <>
Jean-Luc Cooke <> [ SHA-512]
Jussi Eloranta <>
Jürgen Appel <>
Kevin Vandersloot <> [original code of gnome-system-monitor]
Krista Bennett Grothoff <>
Kyle McMartin <> [ SHA-512]
Larry Waldo
Ludovic Courtès <>
Marko Räihä
Michael John Wensley <>
Nathan Evans <>
Paul Ruth <>
Renaldo Ferreira <>
Risto Saarelma
Roman Zippel
Romain Lievin
sheda <>
Simo Viitanen
Tiberius Stef <>
Tomi Tukiainen
Tuomas Toivonen
Tzvetan Horozov <>
Uli Luckas <>
Vasil Dimov <>
Werner Koch <> [original code of libgcrypt]
Translations (webpage, documentation, as far as known):
Chinese : Di Ma
Danish : Jens Palsberg <>
Deutsch : Christian Grothoff <>,
Nils Durner <>
French : Mathieu <>,
Eric Haumant
Milan <>
Japanese : Hiroshi Yamauchi <>
Polish : Adam Welc <>
Romaneste : Bogdan Carbunar <>
Kinyarwanda: Steven Michael Murphy <>
Vietnamese : Phan Vinh Thinh <> and Clytie Siddall <>
Swedish : Daniel Nylander <>
Spanish : Miguel Angel Arruga Vivas <>
Turkish : Nilgün Belma Buğuner <>
GNU in Net : Christian Muellner <>
GNU with Net : Christian Muellner <>
AFS Face : Alex Jones <>
new GNU in Net: Nicklas Larsson <>
FreeBSD : Kirill Ponomarew <>
Debian GNU/Linux: Daniel Baumann <> and
Arnaud Kyheng <>
OS X : Jussi Eloranta <>
If you have contributed and are not listed here, please
notify one of the maintainers in order to be added.
Naming conventions:
include files:
- _lib: library without need for a process
- _service: library that needs a service process
- _plugin: plugin definition
- _protocol: structs used in network protocol
- exceptions:
* GNUNET_config.h --- generated // FIXME: decapitalize
* platform.h --- first included
* plibc.h --- external library
* gnunet_common.h --- fundamental routines
* gnunet_directories.h --- generated
* gettext.h --- external library
exported symbols:
- must start with "GNUNET_modulename_" and be defined in "modulename.c"
- exceptions: those defined in gnunet_common.h
testcases:
- must be called "test_module-under-test_case-description.c"
- "case-description" may be omitted if there is only one test
performance tests:
- must be called "perf_module-under-test_case-description.c"
- "case-description" may be omitted if there is only one test
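The exported-symbol rule above can be sketched with a hypothetical module; all identifiers here are illustrative, not the actual GNUnet API:

```c
/* statistics.c -- hypothetical module illustrating the naming rule:
   exported symbols start with "GNUNET_" plus the module name; anything
   else in the file is declared static and stays internal. */
#include <stdint.h>

/* Internal helper: no prefix needed, not visible outside this module. */
static uint64_t
lookup (const char *name)
{
  (void) name;          /* illustrative stub */
  return 0;
}

/* Exported symbol: GNUNET_ + module name ("statistics") + function. */
uint64_t
GNUNET_statistics_get_counter (const char *name)
{
  return lookup (name);
}
```

A matching testcase would then be named test_statistics_get-counter.c under the same directory.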
src/ directories:
- apps: end-user applications (i.e., gnunet-search)
- connectors: libraries requiring services (i.e., libgnunetstatistics)
- libs: standalone libraries (i.e., libgnunetecrs, etc.)
- plugins: loadable plugins (i.e., TCP transport, MySQL backend)
* transports: udp/tcp/http/dv???
- services: arm-controlled applications (i.e., gnunet-service-statistics)
- util: library for everyone
For each directory in services, there should be one
in connectors and vice-versa.
For each entry in apps, there should be one in libs.
Minimum file-sharing system (in order of dependency):
gnunet-transport (name?)
gnunet-core (name?)
gnunet-statistics (integrate traffic?)
Installation Instructions
Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005,
2006, 2007 Free Software Foundation, Inc.
This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.
Basic Installation
Briefly, the shell commands `./configure; make; make install' should
configure, build, and install this package. The following
more-detailed instructions are generic; see the `README' file for
instructions specific to this package.
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').
It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. Caching is
disabled by default to prevent problems with accidental use of stale
cache files.
If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release. If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.
The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'. You need `configure.ac' if
you want to change it or regenerate `configure' using a newer version
of `autoconf'.
The simplest way to compile this package is:
1. `cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system.
Running `configure' might take a while. While running, it prints
some messages telling which features it is checking for.
2. Type `make' to compile the package.
3. Optionally, type `make check' to run any self-tests that come with
the package.
4. Type `make install' to install the programs and any data files and
documentation.
5. You can remove the program binaries and object files from the
source code directory by typing `make clean'. To also remove the
files that `configure' created (so you can compile the package for
a different kind of computer), type `make distclean'. There is
also a `make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
6. Often, you can also type `make uninstall' to remove the installed
files again.
Compilers and Options
Some systems require unusual options for compilation or linking that the
`configure' script does not know about. Run `./configure --help' for
details on some of the pertinent environment variables.
You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:
./configure CC=c99 CFLAGS=-g LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you can use GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.
With a non-GNU `make', it is safer to compile the package for one
architecture at a time in the source code directory. After you have
installed the package for one architecture, use `make distclean' before
reconfiguring for another architecture.
Installation Names
By default, `make install' installs the package's commands under
`/usr/local/bin', include files under `/usr/local/include', etc. You
can specify an installation prefix other than `/usr/local' by giving
`configure' the option `--prefix=PREFIX'.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
pass the option `--exec-prefix=PREFIX' to `configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
Optional Features
Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.
For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.
Specifying the System Type
There may be some features `configure' cannot figure out automatically,
but needs to determine by the type of machine the package will run on.
Usually, assuming the package is built to be run on the _same_
architectures, `configure' can figure that out, but if it prints a
message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS KERNEL-OS
See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the option `--target=TYPE' to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.
Sharing Defaults
If you want to set default values for `configure' scripts to share, you
can create a site shell script called `config.site' that gives default
values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.
Defining Variables
Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
causes the specified `gcc' to be used as the C compiler (unless it is
overridden in the site shell script).
Unfortunately, this technique does not work for `CONFIG_SHELL' due to
an Autoconf bug. Until the bug is fixed you can use this workaround:
CONFIG_SHELL=/bin/bash /bin/bash ./configure CONFIG_SHELL=/bin/bash
`configure' Invocation
`configure' recognizes the following options to control how it operates.
Print a summary of the options to `configure', and exit.
Print the version of Autoconf used to generate the `configure'
script, and exit.
Enable the cache: use and save the results of the tests in FILE,
traditionally `config.cache'. FILE defaults to `/dev/null' to
disable caching.
Alias for `--cache-file=config.cache'.
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to `/dev/null' (any error
messages will still be shown).
Look for the package's source code in directory DIR. Usually
`configure' can determine that directory automatically.
`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.
INCLUDES = -I$(top_srcdir)/src/include
SUBDIRS = contrib src po
config.rpath \
install-sh \
See ChangeLog.
This document is a summary of why we're moving to GNUnet NG and what
this major redesign tries to address.
First of all, the redesign does not (intentionally) change anything
fundamental about the application-level protocols or how files are
encoded and shared. However, it is not protocol-compatible due to
other changes that do not relate to the essence of the application
protocols.
The redesign tries to address the following major problem groups
describing issues that apply more or less to all GNUnet versions
prior to 0.9.x:
PROBLEM GROUP 1 (scalability):
* The code was modular, but bugs were not. Memory corruption
in one plugin could cause crashes in others and it was not
always easy to identify the culprit. This approach
fundamentally does not scale (in the sense of GNUnet being
a framework and a GNUnet server running hundreds of
different application protocols -- and the result still
being debuggable, secure and stable).
* The code was heavily multi-threaded resulting in complex
locking operations. GNUnet 0.8.x had over 70 different
mutexes and almost 1000 lines of lock/unlock operations.
It is challenging for even good programmers to program or
maintain good multi-threaded code with this complexity.
The excessive locking essentially prevents GNUnet from
actually doing much in parallel on multicores.
* Despite efforts like Freeway, it was virtually
impossible to contribute code to GNUnet that was not
written in C/C++.
* Changes to the configuration almost always required restarts
of gnunetd; the existence of change-notifications does not
really change that (how many users are even aware of SIGHUP,
and how few options worked with that -- and at what expense
in code complexity!).
* Valgrinding could only be done for the entire gnunetd
process. Given that gnunetd does quite a bit of
CPU-intensive crypto, this could not be done for a system
under heavy (or even moderate) load.
* Stack overflows with threads, while rare under Linux these
days, result in really nasty and hard-to-find crashes.
* structs of function pointers in service APIs were
needlessly adding complexity, especially since in
most cases there was no polymorphism
SOLUTION:
* Use multiple, loosely-coupled processes and one big select
loop in each (supported by a powerful library to eliminate
code duplication for each process).
* Eliminate all threads, manage the processes with a
master-process (gnunet-arm, for automatic restart manager)
which also ensures that configuration changes trigger the
necessary restarts.
* Use continuations (with timeouts) as a way to unify
cron-jobs and other event-based code (such as waiting
on network IO).
=> Using multiple processes ensures that memory corruption
stays localized.
=> Using multiple processes will make it easy to contribute
services written in other language(s).
=> Individual services can now be subjected to valgrind
=> Process priorities can be used to schedule the CPU better
Note that we can not just use one process with a big
select loop because we have blocking operations (and the
blocking is outside of our control, thanks MySQL,
sqlite, gethostbyaddr, etc.). So in order to perform
reasonably well, we need some construct for parallel execution.
RULE: If your service contains blocking functions, it
MUST be a process by itself.
* Eliminate structs with function pointers for service APIs;
instead, provide a library (still ending in _service.h) API
that transmits the requests nicely to the respective
process (easier to use, no need to "request" service
in the first place; API can cause process to be started/stopped
via ARM if necessary).
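The "one big select loop" with continuations-with-timeouts can be sketched as follows; all names are illustrative and not the actual GNUnet scheduler API:

```c
/* Single-threaded event loop sketch: tasks are continuations with
   deadlines; select() sleeps until the earliest deadline, then all due
   tasks run.  Illustrative names only, not the GNUnet scheduler API. */
#include <stddef.h>
#include <sys/select.h>
#include <time.h>

#define MAX_TASKS 8

typedef void (*Continuation) (void *cls);

struct Task
{
  Continuation run;             /* NULL marks a free slot */
  void *cls;
  time_t deadline;
};

static struct Task tasks[MAX_TASKS];

/* Register a continuation to run after 'delay' seconds. */
static int
schedule (Continuation run, void *cls, time_t delay)
{
  for (int i = 0; i < MAX_TASKS; i++)
    if (NULL == tasks[i].run)
      {
        tasks[i].run = run;
        tasks[i].cls = cls;
        tasks[i].deadline = time (NULL) + delay;
        return 0;
      }
  return -1;                    /* no free slot */
}

/* Sleep (via select) until the earliest deadline; a real loop would
   also pass the network FDs it watches to select() here, which is
   what unifies cron-style jobs with network IO. */
static void
wait_for_next_deadline (void)
{
  time_t now = time (NULL);
  time_t next = 0;
  int have = 0;
  for (int i = 0; i < MAX_TASKS; i++)
    if ((NULL != tasks[i].run) && (! have || tasks[i].deadline < next))
      {
        next = tasks[i].deadline;
        have = 1;
      }
  if (have && (next > now))
    {
      struct timeval tv = { next - now, 0 };
      select (0, NULL, NULL, NULL, &tv);
    }
}

/* Run every task whose deadline has passed; each fires exactly once. */
static void
run_ready_tasks (void)
{
  time_t now = time (NULL);
  for (int i = 0; i < MAX_TASKS; i++)
    if ((NULL != tasks[i].run) && (tasks[i].deadline <= now))
      {
        Continuation run = tasks[i].run;
        tasks[i].run = NULL;
        run (tasks[i].cls);
      }
}

/* Example continuation: increments the integer passed as closure. */
static void
increment_task (void *cls)
{
  (*(int *) cls)++;
}
```

Because everything runs on one thread, no mutexes are needed; blocking calls simply may not appear inside any continuation (hence the RULE above).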
PROBLEM GROUP 2 (UTIL-APIs causing bugs):
* The existing logging functions were awkward to use and
their expressive power was never really used for much.
* While we had some rules for naming functions, there
were still plenty of inconsistencies.
* Specification of default values in configuration could
result in inconsistencies between defaults in
config.scm and defaults used by the program; also,
different defaults might have been specified for the
same option in different parts of the program.
* The TIME API did not distinguish between absolute
and relative time, requiring users to know which
type of value some variable contained and to
manually convert properly. Combined with the
possibility of integer overflows this is a major
source of bugs.
* The TIME API for seconds has a theoretical problem
with a 32-bit overflow on some platforms which is
only partially fixed by the old code with some workarounds.
SOLUTION:
* Logging was radically simplified.
* Functions are now more consistently named.
* Configuration has no more defaults; instead,
we load a global default configuration file
before the user-specific configuration (which
can be used to override defaults); the global
default configuration file will be generated
from config.scm.
* Time now distinguishes between
struct GNUNET_TIME_Absolute and
struct GNUNET_TIME_Relative. We use structs
so that the compiler won't coerce for us
(forcing the use of specific conversion
functions which have checks for overflows, etc.).
Naturally the need to use these functions makes
the code a bit more verbose, but that's a good
thing given the potential for bugs.
* There is no more TIME API function to do anything
with 32-bit seconds
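The struct-based time split can be sketched like this; the struct names follow the text, but the field names and the conversion function are assumptions for illustration:

```c
/* Sketch of the absolute/relative time split.  Wrapping the values in
   structs stops the compiler from coercing one into the other, so every
   combination must go through a function that can check for overflow.
   Field and function names are illustrative, not the real GNUnet API. */
#include <stdint.h>

struct GNUNET_TIME_Absolute { uint64_t abs_value; };
struct GNUNET_TIME_Relative { uint64_t rel_value; };

/* Overflow-checked addition: saturates at the maximum ("forever")
   instead of silently wrapping around. */
static struct GNUNET_TIME_Absolute
time_absolute_add (struct GNUNET_TIME_Absolute start,
                   struct GNUNET_TIME_Relative delta)
{
  struct GNUNET_TIME_Absolute ret;
  if (start.abs_value > UINT64_MAX - delta.rel_value)
    ret.abs_value = UINT64_MAX;     /* saturate on overflow */
  else
    ret.abs_value = start.abs_value + delta.rel_value;
  return ret;
}
```

Passing a GNUNET_TIME_Absolute where a GNUNET_TIME_Relative is expected is now a compile error rather than a silent bug.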
PROBLEM GROUP 3 (statistics):
* Databases and others needed to store capacity values
similar to what stats was already doing, but
across process lifetimes ("state"-API was a partial
solution for that, but using it was clunky)
* Only gnunetd could use statistics, but other
processes in the GNUnet system might have had
good uses for it as well
SOLUTION:
* New statistics library and service that offer
an API to inspect and modify statistics
* Statistics are distinguished by service name
in addition to the name of the value
* Statistics can be marked as persistent, in
which case they are written to disk when
the statistics service shuts down.
=> One solution for existing stats uses,
application stats, database stats and
versioning information!
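A minimal sketch of such a statistics store, keyed by (service, name) with a persistence flag; the fixed-size table, linear search, and all identifiers are assumptions to keep the illustration short, not the real API:

```c
/* Statistics store sketch: values are distinguished by service name in
   addition to the value name; persistent entries would be written to
   disk when the statistics service shuts down.  Illustrative only. */
#include <stdint.h>
#include <string.h>

#define MAX_STATS 32

struct StatEntry
{
  char service[32];
  char name[64];
  uint64_t value;
  int persistent;       /* non-zero: survives across process lifetimes */
  int in_use;
};

static struct StatEntry stats[MAX_STATS];

static int
stat_set (const char *service, const char *name,
          uint64_t value, int persistent)
{
  for (int i = 0; i < MAX_STATS; i++)
    if (stats[i].in_use
        && (0 == strcmp (stats[i].service, service))
        && (0 == strcmp (stats[i].name, name)))
      {
        stats[i].value = value;
        stats[i].persistent = persistent;
        return 0;
      }
  for (int i = 0; i < MAX_STATS; i++)
    if (! stats[i].in_use)
      {
        strncpy (stats[i].service, service, sizeof stats[i].service - 1);
        strncpy (stats[i].name, name, sizeof stats[i].name - 1);
        stats[i].value = value;
        stats[i].persistent = persistent;
        stats[i].in_use = 1;
        return 0;
      }
  return -1;            /* table full */
}

static int
stat_get (const char *service, const char *name, uint64_t *value)
{
  for (int i = 0; i < MAX_STATS; i++)
    if (stats[i].in_use
        && (0 == strcmp (stats[i].service, service))
        && (0 == strcmp (stats[i].name, name)))
      {
        *value = stats[i].value;
        return 0;
      }
  return -1;            /* not found */
}
```

Because entries are keyed by service as well as name, "bytes-sent" in the transport service and "bytes-sent" in core are distinct values.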
PROBLEM GROUP 4 (Testing):
* The existing structure of the code with modules
stored in places far away from the test code
resulted in tools like lcov not giving good results.
* The codebase had evolved into a complex, deeply
nested hierarchy often with directories that
then only contained a single file. Some of these
files had the same name making it hard to find
the source corresponding to a crash based on
the reported filename/line information.
* Non-trivial portions of the code lacked good testcases,
and it was not always obvious which parts of the code
were not well-tested.
SOLUTION:
* Code that should be tested together is now
in the same directory.
* The hierarchy is now essentially flat, each
major service having one directory under src/;
naming conventions help to make sure that
files have globally-unique names
* All code added to the new repository must
come with testcases with reasonable coverage.
PROBLEM GROUP 5 (core/transports):
* The new DV service requires session key exchange
between DV-neighbours, but the existing
session key code can not be used to achieve this.
* The core requires certain services
(such as identity, pingpong, fragmentation,
transport, traffic, session) which makes it
meaningless to have these as modules
(especially since there is really only one
way to implement these)
* HELLOs are larger than necessary since we need
one for each transport (and hence often have
to pick a subset of our HELLOs to transmit)
* Fragmentation is done at the core level but only
required for a few transports; future versions of
these transports might want to be aware of fragments
and do things like retransmission
* Autoconfiguration is hard since we have no good
way to detect (and then use securely) our external IP address
* It is currently not possible for multiple transports
between the same pair of peers to be used concurrently
in the same direction(s)
* We're using lots of cron-based jobs to periodically
try (and fail) to build and transmit messages
SOLUTION:
* Rewrite core to integrate most of these services
into one "core" service.
* Redesign HELLO to contain the addresses for
all enabled transports in one message (avoiding
having to transmit the public key and signature
many, many times)
* With discovery being part of the transport service,
it is now also possible to "learn" our external
IP address from other peers (we just add plausible
addresses to the list; other peers will discard
those addresses that don't work for them!)
* New DV will consist of a "transport" and a
high-level service (to handle encrypted DV
control- and data-messages).
* Move expiration from one field per HELLO to one
per address
* Require signature in PONG, not in HELLO (and confirm
one address at a time)
* Move fragmentation into helper library linked
against by UDP (and others that might need it)
* Link-to-link advertising of our HELLO is transport
responsibility; global advertising/bootstrap remains
responsibility of higher layers
* Change APIs to be event-based (transports pull for
transmission data instead of core pushing and failing)
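The redesigned HELLO described above (one key, many addresses, per-address expiration) could be laid out roughly as follows; this is an illustrative in-memory sketch, not the actual wire format:

```c
/* Sketch of the redesigned HELLO: the public key is carried once for
   all enabled transports, and each address has its own expiration
   (instead of one expiration per HELLO).  Sizes and field names are
   assumptions for illustration only. */
#include <stdint.h>

#define MAX_ADDRESSES 8

struct HelloAddress
{
  char transport[16];     /* e.g. "tcp", "udp", "http" */
  char address[64];
  uint64_t expires_at;    /* per-address expiration */
};

struct Hello
{
  unsigned char public_key[32];   /* transmitted once, not per address */
  unsigned int num_addresses;
  struct HelloAddress addresses[MAX_ADDRESSES];
};

/* Count the addresses still valid at the given time; expired entries
   can be dropped individually without invalidating the whole HELLO. */
static unsigned int
hello_count_valid (const struct Hello *h, uint64_t now)
{
  unsigned int n = 0;
  for (unsigned int i = 0; i < h->num_addresses; i++)
    if (h->addresses[i].expires_at > now)
      n++;
  return n;
}
```

Carrying the key and signature once per HELLO, rather than once per transport, is what avoids transmitting them "many, many times".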
PROBLEM GROUP 6 (FS-APIs):
* As with gnunetd, the FS-APIs are heavily threaded,
resulting in hard-to-understand code (slightly
better than gnunetd, but not much).
* GTK in particular does not like this, resulting
in complicated code to switch to the GTK event
thread when needed (which may still be causing
problems on Gnome, not sure).
* If GUIs die (or are not properly shutdown), state
of current transactions is lost (FSUI only
saves to disk on shutdown)
SOLUTION (draft, not done yet, details missing...):
* Eliminate threads from FS-APIs
=> Open question: how to best write the APIs to
allow integration with diverse event loops