Depending on how people are using rate.Reset, this may or may not be a
breaking change, since Timestamp supports all of the exported methods
from time.Time.
Inspired by the Ruby version [1], UploadReleaseAsset now accepts a
simple path to a file.
The Content-Type is now determined internally using the
mime.TypeByExtension() function.
Note this version is also more flexible: the previous version took an
io.Reader, but in practice only three kinds of io.Reader actually
worked. That's because http.NewRequest needs to determine the
Content-Length of the body, and it can only do so for bytes.Buffer,
bytes.Reader, or strings.Reader [2]. Note there is an issue raised
about this [3].
With a path to a file, we can easily determine the file size, and then
manually add it to the request.
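A minimal sketch of that approach (the names here are illustrative,
not the library's actual internals):

```go
import (
	"mime"
	"net/http"
	"os"
	"path/filepath"
)

func newUploadRequest(url, path string) (*http.Request, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	fi, err := f.Stat()
	if err != nil {
		f.Close()
		return nil, err
	}
	req, err := http.NewRequest("POST", url, f)
	if err != nil {
		f.Close()
		return nil, err
	}
	// http.NewRequest can't infer the length of an *os.File body, so
	// set it explicitly from the file size.
	req.ContentLength = fi.Size()
	req.Header.Set("Content-Type", mime.TypeByExtension(filepath.Ext(path)))
	return req, nil
}
```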
[1] https://github.com/octokit/octokit.rb/blob/master/lib/octokit/client/releases.rb#L85
[2] http://golang.org/src/pkg/net/http/request.go#L447
[3] https://code.google.com/p/go/issues/detail?id=6738
ktoso kind of shamed me with his extensive resource tests in #49 :), so
I'm finally starting to set up a structure for more general testing of
this type. This change adds the `testJSONMarshal` helper function, and
adds tests for the `User` type. This includes two checks:
- check that an empty resource produces an empty JSON object. This
effectively verifies that all fields include 'omitempty' so that
we're not unintentionally sending additional data over the wire.
- check that a full resource with every field populated produces the
expected JSON. This verifies that the JSON field mappings are
correct. In this case, it might be okay to use resource samples from
the GitHub docs, though I do still prefer very simple field values
since it makes tests easier to read.
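A rough sketch of what the helper looks like (the actual
implementation may differ slightly):

```go
import (
	"encoding/json"
	"testing"
)

// testJSONMarshal marshals v and compares the result against want.
func testJSONMarshal(t *testing.T, v interface{}, want string) {
	got, err := json.Marshal(v)
	if err != nil {
		t.Errorf("unable to marshal JSON for %v", v)
	}
	if string(got) != want {
		t.Errorf("json.Marshal(%v) returned %s, want %s", v, got, want)
	}
}
```

The first check above then becomes a call like
testJSONMarshal(t, &User{}, `{}`).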
When these tests are added for each resource type, we can reduce all of
our other tests to return bare minimal response bodies, since the
resource fields are already being tested at that point.
These tests exercise the code path in all of our API methods where an
error is returned from github.NewRequest(). There are actually only two
ways to trigger an error here:
- provide a URL which can't be parsed
- provide a request body which can't be JSON encoded
In theory, it's also possible that http.NewRequest() could return an
error which would make its way up the stack, but in practice this will
never happen. That's because http.NewRequest(), as currently
implemented, will only ever return an error if it's unable to parse the
provided URL. But since we're simply passing the output of
URL.String(), we know that it will never be unparseable. That leaves
the two options above.
For methods that don't have a request body (which is to say, most), all
we can do is force a URL parse error, which is most easily accomplished
by putting an unescaped "%" in the URL path. This works for methods
that take a string input parameter that is directly added to the path.
If there are no string parameters to the function, or if they're not
part of the URL path, then we're mostly out of luck. In some of those
cases, we can still try to force a JSON encoding error, but otherwise
there will be no way to exercise this code path in those methods.
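For example (client setup and method signatures here are illustrative,
not exact):

```go
func TestUsersService_Get_invalidUser(t *testing.T) {
	// an unescaped "%" in the path makes url.Parse fail inside
	// NewRequest, so the method returns early with an error
	_, _, err := client.Users.Get("%")
	if err == nil {
		t.Error("expected URL parse error for unescaped % in path")
	}
}
```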
Of course, for those methods the fact that we can't force an error
means that the error case isn't actually reachable. Nevertheless, I
don't want
to ignore the error entirely in case some future change in the
implementation of github.NewRequest() or http.NewRequest() might
introduce a new possibility for erroring out.
What we're testing here is not likely to be a common error case, and
adding all of these tests to check for it in each method may be a bit
OCD, but I'd rather have the test coverage.
Like a51d6b4303, this change makes me sad, mainly because it is a
breaking change for all clients, and makes common tasks like reading
data out of structs slightly more work, with no direct benefit. Notably,
developers will need to make sure to check for nil values before trying
to dereference these pointers. Sadly, the change is still necessary, as
is more fully explained in issue #19. We can make the nil pointer
checks a little easier by adding some Get* funcs like goprotobuf does.
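For example, a goprotobuf-style accessor might look like this (a
sketch; it assumes User has a *string Login field):

```go
// GetLogin returns the Login field if it's non-nil, and the zero
// value otherwise, so callers can skip the nil checks.
func (u *User) GetLogin() string {
	if u == nil || u.Login == nil {
		return ""
	}
	return *u.Login
}
```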
I spent a lot of time over the last few weeks exploring this change
(switching fields to pointers) versus the much larger change of using
protocol buffers for all GitHub data types. While the goprotobuf
library is very mature and feature-rich (it's used heavily inside of
Google), it's the wrong tool for this task, since we're not actually
using the proto wire format. While it does address the immediate
concern in #19, it makes way too many other things terribly awkward.
One of the biggest drawbacks of this change is that it will make the
string output from fmt.Printf("%v") next to useless, since all pointer
values are displayed as their memory address. To handle that, I'll be
writing a custom String() function for these structs that is heavily
inspired by goprotobuf and internals from go's fmt package.
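To illustrate the problem (User here is a stand-in for the library's
structs):

```go
package main

import "fmt"

type User struct{ Login *string }

func main() {
	login := "octocat"
	// prints an address like {0xc000010250} rather than {octocat}
	fmt.Printf("%v\n", User{Login: &login})
}
```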
When initializing a slice of concrete structs, it's not necessary to
declare the type of each item in the slice, since the compiler knows
what you mean.
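For example, with a throwaway type:

```go
package main

import "fmt"

type Point struct{ X, Y int }

func main() {
	// the element type inside the composite literal can be elided;
	// this is equivalent to []Point{Point{1, 2}, Point{3, 4}}
	pts := []Point{{1, 2}, {3, 4}}
	fmt.Println(pts)
}
```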
The truth is, this change makes me sad. It's a breaking change
for all clients and it adds more complexity to the library surface. In
most cases, clients will simply drop the Response object on the floor
(which is actually all the library itself was doing before this
change... now we're just pushing that off to the client).
Initially, the Response object will be primarily of interest for
functions that return paginated result sets, since the Response.NextPage
field is the only way to know for sure if there are more pages that
should be fetched. And this is really the cleanest way to get at that
data, so in that respect this change isn't so bad.
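A hypothetical sketch of that usage (the option struct and method
signature are assumptions about the eventual API, not its final form):

```go
func listAllRepos(client *Client, user string) ([]Repository, error) {
	opt := &RepositoryListOptions{}
	var all []Repository
	for {
		repos, resp, err := client.Repositories.List(user, opt)
		if err != nil {
			return nil, err
		}
		all = append(all, repos...)
		if resp.NextPage == 0 { // zero means no rel="next" link was present
			break
		}
		opt.Page = resp.NextPage
	}
	return all, nil
}
```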
It's also worth noting that returning the raw Response object makes a
lot more sense in a GitHub library than it might in others, given how
GitHub makes liberal (read: proper) use of HTTP request and response
headers. Other APIs, like Google's various APIs for example, tend to
push things like pagination links into the response body. While this is
certainly less of a purist view in terms of REST, it does make the lives
of client developers a lot easier, since then the response body contains
everything you need to know. But whatever; this is how GitHub rolls, so
we'll roll right along with them. (Somewhat ironically we are ignoring
the RESTful links in the GitHub response bodies, since we're actually
calling the API in an RPC style and don't do anything with those links.)
We still don't have an easy way to set arbitrary request headers, but
that's a problem for another day.
Fixes #22
This allows us to provide convenience methods for accessing the
pagination links that are returned in HTTP Link headers. To expose
that, we'll need to return the Response from all API methods, which will
also allow users of the client to do any other kind of inspection of the
response. This adds additional complexity to the API and will be a
breaking change, but seems to be the cleanest way to enable this sort of
thing.
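A minimal sketch of parsing a Link header (not the library's actual
code), given GitHub's format of
<https://api.github.com/user/repos?page=3>; rel="next", ...:

```go
import "strings"

// parseLinkHeader maps rel values ("next", "prev", "first", "last")
// to their URLs.
func parseLinkHeader(h string) map[string]string {
	links := make(map[string]string)
	for _, part := range strings.Split(h, ",") {
		segs := strings.Split(strings.TrimSpace(part), ";")
		if len(segs) < 2 {
			continue
		}
		url := strings.Trim(strings.TrimSpace(segs[0]), "<>")
		for _, seg := range segs[1:] {
			seg = strings.TrimSpace(seg)
			if strings.HasPrefix(seg, `rel="`) && strings.HasSuffix(seg, `"`) {
				rel := strings.TrimSuffix(strings.TrimPrefix(seg, `rel="`), `"`)
				links[rel] = url
			}
		}
	}
	return links
}
```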
Refs #22
Because the reset date is expressed as a Unix timestamp (rather than an
ISO 8601 timestamp), we can't directly unmarshal the JSON response.
Instead we have to do a little weird indirection to unmarshal it as an
int first, and then convert it into a proper Time object.
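A sketch of that indirection (field names are approximate):

```go
import (
	"encoding/json"
	"time"
)

type Rate struct {
	Limit     int
	Remaining int
	Reset     time.Time
}

// UnmarshalJSON decodes the reset field as a Unix timestamp first,
// then converts it into a time.Time.
func (r *Rate) UnmarshalJSON(data []byte) error {
	var aux struct {
		Limit     int   `json:"limit"`
		Remaining int   `json:"remaining"`
		Reset     int64 `json:"reset"`
	}
	if err := json.Unmarshal(data, &aux); err != nil {
		return err
	}
	r.Limit = aux.Limit
	r.Remaining = aux.Remaining
	r.Reset = time.Unix(aux.Reset, 0)
	return nil
}
```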
Since the library has received external contributions, Google is no
longer the sole copyright holder. Update the boilerplate at the top of
each file to reflect that, and add AUTHORS and CONTRIBUTORS files to
track this information.
This commit adds the core library functionality and establishes the
general calling style and testing structure. Only a few GitHub API
methods related to organizations and repositories are included in this
first commit, mainly to cement the calling style.