count_fields() counts the number of fields in each line of a file. This is useful for diagnosing problems with functions that fail to parse correctly.

Usage

count_fields(file, tokenizer, skip = 0, n_max = -1L)

Arguments

file

Either a path to a file, a connection, or literal data (either a single string or a raw vector).

Files ending in .gz, .bz2, .xz, or .zip will be automatically uncompressed. Files starting with http://, https://, ftp://, or ftps:// will be automatically downloaded. Remote gz files can also be automatically downloaded and decompressed.

Literal data is most useful for examples and tests. To be recognised as literal data, the input must either be wrapped with I(), be a string containing at least one new line, or be a vector containing at least one string with a new line (see the short sketch after this argument's description).

Using a value of clipboard() will read from the system clipboard.
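
As a brief sketch of literal-data input (the inline string below is made up for illustration; because it contains new lines, count_fields() should treat it as data rather than as a path, and wrapping it in I() makes the intent explicit):

count_fields(I("x,y\n1,2\n3,4"), tokenizer_csv())
#> [1] 2 2 2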

tokenizer

A tokenizer that specifies how to break the file up into fields, e.g. tokenizer_csv() or tokenizer_fwf().

skip

Number of lines to skip before reading data.

n_max

Optionally, the maximum number of rows to count fields for.
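
A short sketch of skip and n_max used together, on the example file bundled with readr (the counts shown assume each line of mtcars.csv has 11 fields, as in the example below):

# skip the header line, then count fields for at most 3 rows
count_fields(readr_example("mtcars.csv"), tokenizer_csv(), skip = 1, n_max = 3)
#> [1] 11 11 11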

Examples

count_fields(readr_example("mtcars.csv"), tokenizer_csv())
#>  [1] 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
#> [24] 11 11 11 11 11 11 11 11 11 11
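
As a follow-up illustrating the diagnostic use described above, a ragged inline CSV (made-up data) makes a malformed row easy to spot, since its field count differs from the rest:

# the third line has an extra field, so its count stands out
count_fields("a,b,c\n1,2,3\n4,5,6,7\n8,9,10", tokenizer_csv())
#> [1] 3 3 4 3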