Update doc slugs to improve analytic tracking, move to appropriate folders (#113630)

* Update the slugs to improve google analytics drilldown tracking

* more slug updates

* Fix some formatting issues in building blocks

* update paths

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Stacey Gammon 2021-10-04 13:36:45 -04:00 committed by GitHub
parent a4eab441c0
commit eddbc8db6f
27 changed files with 219 additions and 218 deletions

@@ -0,0 +1,378 @@
---
id: kibBestPractices
slug: /kibana-dev-docs/contributing/best-practices
title: Best practices
summary: Best practices to follow when building a Kibana plugin.
date: 2021-03-17
tags: ['kibana', 'onboarding', 'dev', 'architecture']
---
## General
First things first, be sure to review our <DocLink id="kibDevPrinciples" text="development principles"/> and check out all the available
platform <DocLink id="kibBuildingBlocks" text="building blocks"/> that can simplify plugin development.
## Developer documentation
### High-level documentation
#### Structure
Refer to the [Divio documentation](https://documentation.divio.com/) for guidance on where and how to structure our high-level documentation.
<DocLink id="kibDevDocsWelcome" text="Getting started" /> and
<DocLink id="kibPlatformIntro" text="Key concepts" /> sections are both _explanation_ oriented,
<DocLink id="kibDevTutorialDebugging" text="Tutorials" /> covers both _tutorials_ and _How to_, and the <DocLink
id="kibDevDocsApiWelcome"
text="API documentation"
/> section covers _reference_ material.
#### Location
If the information spans multiple plugins, consider adding it to the [dev_docs](https://github.com/elastic/kibana/tree/master/dev_docs) folder. If it is plugin specific, consider adding it inside the plugin folder. Write it in an `.mdx` file if you would like it to show up in our new (beta) documentation system.
<DocCallOut title="internal only">
To add docs into the new docs system, create an `.mdx` file that
contains <DocLink id="docsSyntax" section="frontmatter" text="frontmatter"/>. Read about the syntax <DocLink id="docsSyntax" text="here"/>. An extra step is needed to add a menu item. <DocLink id="docsSetup" text="These instructions" /> will walk you through how to set the docs system
up locally and edit the nav menu.
</DocCallOut>
#### Keep content fresh
A fresh pair of eyes is invaluable. Recruit new hires to read, review and update documentation. Leads should also periodically review documentation to ensure it stays up to date. File issues any time you notice documentation is outdated.
#### Consider your target audience
Documentation in the Kibana Developer Guide is targeted towards developers building Kibana plugins. Keep implementation details about internal plugin code out of these docs.
#### High to low level
When a developer first lands in our docs, think about their journey. Introduce basic concepts before diving into details. The left navigation should be set up so documents on top are higher level than documents near the bottom.
#### Think outside-in
It's easy to forget what it felt like to first write code in Kibana, but do your best to frame these docs "outside-in". Don't use esoteric, internal language unless a definition is documented and linked. The fresh eyes of a new hire can be a great asset.
### API documentation
We automatically generate <DocLink id="kibDevDocsApiWelcome" text="API documentation"/>. The following guidelines will help ensure your <DocLink id="kibPlatformIntro" section="public-plugin-api" text="public APIs" /> are useful.
#### Code comments
Every publicly exposed function, class, interface, type, parameter, and property should be documented with JSDoc-style comments; a sketch follows the list below.
- Use `@param` tags for every function parameter.
- Use `@returns` tags for return types.
- Use `@throws` when appropriate.
- Use `@beta` or `@deprecated` when appropriate.
- Use `@removeBy {version}` on `@deprecated` APIs. The version should be the last version the API will work in. For example, `@removeBy 7.15` means the API will be removed in 7.16. This lets us avoid mid-release cycle coordination. The API can be removed as soon as the 7.15 branch is cut.
- Use `@internal` to indicate this API item is intended for internal use only, which will also remove it from the docs.
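A hedged sketch of these tags on a public function (the function below is made up for illustration, not a real Kibana API):
```ts
/**
 * Creates a snapshot of the given dashboard and returns the snapshot id.
 * @param dashboardId Id of the dashboard to snapshot.
 * @param title Optional human readable title for the snapshot.
 * @returns The id of the newly created snapshot.
 * @throws If no dashboard exists with the given id.
 * @deprecated Use the snapshot service instead.
 * @removeBy 7.15
 */
export declare function createDashboardSnapshot(dashboardId: string, title?: string): string;
```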
#### Interfaces vs inlined types
Prefer types and interfaces over complex inline objects. For example, prefer:
```ts
/**
 * The SearchSpec interface contains settings for creating a new SearchService, like
 * username and password.
 */
export interface SearchSpec {
  /**
   * Stores the username. Duh.
   */
  username: string;
  /**
   * Stores the password. I hope it's encrypted!
   */
  password: string;
}

/**
 * Retrieve search services
 * @param searchSpec Configuration information for initializing the search service.
 * @returns the id of the search service
 */
export declare const getSearchService: (searchSpec: SearchSpec) => string;
```
over:
```ts
/**
* Retrieve search services
* @param searchSpec Configuration information for initializing the search service.
* @returns the id of the search service
*/
export declare const getSearchService: (searchSpec: { username: string; password: string }) => string;
```
In the former, there will be a link to the `SearchSpec` interface with documentation for the `username` and `password` properties. In the latter the object will render inline, without comments:
![prefer interfaces documentation](./assets/dev_docs_nested_object.png)
#### Export every type used in a public API
When a publicly exported API item references a private type, this results in a broken link in our docs system. The private type is, by proxy, part of your public API and, as such, should be exported.
Do:
```ts
export interface AnInterface { bar: string }
export type foo = string | AnInterface;
```
Don't:
```ts
interface AnInterface { bar: string }
export type foo = string | AnInterface;
```
#### Avoid “Pick”
`Pick` not only ends up being unhelpful in our documentation system, but it's also of limited help in your IDE. For that reason, avoid `Pick` and other similarly complex types on your public API items. Using these semantics internally is fine.
![pick api documentation](./assets/api_doc_pick.png)
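For example, instead of exposing a `Pick` of an internal interface on a public API, a small purpose-built interface documents better (the names below are hypothetical):
```ts
interface SearchSession {
  id: string;
  name: string;
  createdAt: string;
}

// Avoid on public APIs: docs and IDE hints only show the `Pick` expression, not the resulting shape.
export type SessionSummaryFromPick = Pick<SearchSession, 'id' | 'name'>;

// Prefer: a small, explicitly documented interface.
export interface SessionSummary {
  /** Unique id of the search session. */
  id: string;
  /** Human readable name shown in the UI. */
  name: string;
}
```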
### Example plugins
Running Kibana with `yarn start --run-examples` will include all [example plugins](https://github.com/elastic/kibana/tree/master/examples). These are tested examples of platform services in use. We strongly encourage anyone providing a platform level service or <DocLink id="kibBuildingBlocks" text="building block"/> to include a tutorial that links to a tested example plugin. This is better than relying on copied code snippets, which can quickly get out of date.
## Performance
Build with scalability in mind.
- Consider data with many fields
- Consider data with high cardinality fields
- Consider large data sets that span a long time range
- Consider slow internet and low bandwidth environments
## Accessibility
Did you know Kibana makes a public statement about our commitment to creating an accessible product for people
with disabilities? [We do](https://www.elastic.co/guide/en/kibana/master/accessibility.html)! It's very important
all of our apps are accessible.
- Learn how [EUI tackles accessibility](https://elastic.github.io/eui/#/guidelines/accessibility)
- If you don't use EUI, follow the same EUI accessibility standards
<DocCallOut title="Internal only">
Elasticians, check out the #accessibility channel to ask questions and receive guidance.
</DocCallOut>
## Localization
Kibana is translated into other languages. Use our i18n utilities for all public-facing strings so that every Kibana app can be localized. Read and adhere to our [i18n guidelines](https://github.com/elastic/kibana/blob/master/packages/kbn-i18n/GUIDELINE.md).
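For example, a user-facing label is wrapped with the `@kbn/i18n` utilities rather than hard-coded (the id and message below are made up):
```ts
import { i18n } from '@kbn/i18n';

// The id namespaces the string to your plugin; translators supply localized versions of defaultMessage.
export const SAVE_PANEL_LABEL = i18n.translate('myPlugin.savePanel.saveButtonLabel', {
  defaultMessage: 'Save panel',
});
```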
<DocCallOut title="Internal only">
Elasticians, check out the #kibana-localization channel to ask questions and receive guidance.
</DocCallOut>
## Styleguide
We use ESLint rules when possible, but please review our [styleguide](https://github.com/elastic/kibana/blob/master/STYLEGUIDE.mdx), which includes recommendations that can't be enforced by linting.
Per-plugin ESLint overrides are discouraged.
## Plugin best practices
Don't export <DocLink id="kibPlatformIntro" section="public-plugin-api" text="public APIs"/> without reason. Make your public APIs as small as possible. You will have to maintain them, and consider backward compatibility when making changes.
Add `README.md` to all your plugins and services and include contact information.
## Re-inventing the wheel
Over-refactoring can be a problem in its own right, but it's still important to be aware of the existing services that are out there and use them when it makes sense. Check out our <DocLink id="kibBuildingBlocks" text="building blocks"/> to see what high-level services are at your disposal. In addition, our <DocLink id="kibDevDocsApiWelcome" text="API documentation" /> lists additional services.
## Feature development
### Timing
<DocCallOut title="Internal only">
Try not to put your PR in review mode, or merge large changes, right before Feature Freeze. It's inevitably one of the most volatile times for the
Kibana code base; try not to contribute to this volatility. Doing so can:
- increase the likelihood of conflicts from other features being merged at the last minute
- leave your feature with less QA time
- mean your feature gets a less careful review, as reviewers are often swamped at this time
All of the above contributes to more bugs being found in the QA cycle and can cause a delay in the release. Prefer instead to merge
your large change right _after_ feature freeze. If you are worried about missing your initial release version goals, review our
[release train philosophy](https://github.com/elastic/dev/blob/master/shared/time-based-releases.md). It's okay!
</DocCallOut>
### Size
When possible, build features with incremental sets of small and focused PRs, but don't check in unused code, and don't expose any feature on master that you would not be comfortable releasing.
![product_stages](./assets/product_stages.png)
If your feature cannot be broken down into smaller components, or multiple engineers will be contributing, you have a few other options to consider.
**1. Hide your feature behind a feature flag**
Features can be merged behind a flag if you are not ready to make them the default experience, but all code should still be tested, complete, and bug-free.
A good question to ask yourself is, how will you feel if a customer turns this feature on? Is it usable, even if not up to the
level of something we would market? It should have some level of minimal utility.
Another question to ask yourself is, if this feature gets cancelled, how difficult will it be to remove?
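One common way to do this, sketched below under the assumption of a server-side plugin config (the plugin and setting names are hypothetical), is a boolean setting defined with `@kbn/config-schema` that defaults to off:
```ts
// config.ts of a hypothetical plugin: the new experience stays off by default.
import { schema, TypeOf } from '@kbn/config-schema';

export const configSchema = schema.object({
  // Feature flag: administrators can opt in via kibana.yml until the feature ships by default.
  newTableViewEnabled: schema.boolean({ defaultValue: false }),
});

export type MyPluginConfig = TypeOf<typeof configSchema>;
```
The plugin can then read the flag (for example through its initializer context on the server) and skip registering the new UI while it is disabled.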
**2. Develop on a feature branch**
This option is useful if you have more than one contributor working on a large feature. The downside is handling code conflicts when rebasing with the main branch.
Consider how you want to handle the PR review. Reviewing each PR going into the feature branch can lighten the review process when merging into the main branch.
**3. Use an example plugin**
If you are building a service for developers, create an [example plugin](https://github.com/elastic/kibana/tree/master/examples) to showcase and test intended usage. This is a great way for reviewers and PMs to play around with a feature through the UI, before the production UI is ready. This can also help developers consuming your services get hands-on experience.
## Embrace the monorepo
Kibana uses a monorepo and our processes and tooling are built around this decision. Utilizing a monorepo allows us to have a consistent peer review process and enforce the same code quality standards across all of Kibana's code. It also eliminates the necessity to have separate versioning strategies and constantly coordinate changes across repos.
When experimenting with code, it's completely fine to create a separate GitHub repo to use for initial development. Once the code has matured to a point where it's ready to be used within Kibana, it should be integrated into the Kibana GitHub repo.
There are some exceptions where a separate repo makes sense. However, they are exceptions to the rule. A separate repo has proven beneficial when there's a dedicated team collaborating on a package which has multiple consumers, for example [EUI](https://github.com/elastic/eui).
It may be tempting to get caught up in the dream of writing the next package which is published to npm and downloaded millions of times a week. Knowing the quality of developers that are working on Kibana, this is a real possibility. However, knowing which packages will see mass adoption is impossible to predict. Instead of jumping directly to writing code in a separate repo and accepting all of the complications that come along with it, prefer keeping code inside the Kibana repo. A [Kibana package](https://github.com/elastic/kibana/tree/master/packages) can be used to publish a package to npm, while still keeping the code inside the Kibana repo. Move code to an external repo only when there is a good reason, for example to enable external contributions.
## Security best practices
When writing code for Kibana, be sure to follow these best practices to avoid common vulnerabilities. Refer to the included Open Web
Application Security Project (OWASP) references to learn more about these types of attacks.
### Cross-site Scripting (XSS)
[_OWASP reference for XSS_](https://owasp.org/www-community/attacks/xss)
XSS is a class of attacks where malicious scripts are injected into vulnerable websites. Kibana defends against this by using the React
framework to safely encode data that is rendered in pages, the EUI framework to [automatically sanitize
links](https://elastic.github.io/eui/#/navigation/link#link-validation), and a restrictive `Content-Security-Policy` header.
**Best practices**
- Check for dangerous functions or assignments that can result in unescaped user input in the browser DOM. Avoid using:
- **React:** [`dangerouslySetInnerHtml`](https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml).
- **Browser DOM:** `Element.innerHTML` and `Element.outerHTML`.
- If using the aforementioned unsafe functions or assignments is absolutely necessary, follow [these XSS prevention
rules](https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html#xss-prevention-rules) to ensure that
user input is not inserted into unsafe locations and that it is escaped properly.
- Use EUI components to build your UI, particularly when rendering `href` links. Otherwise, sanitize user input before rendering links to
ensure that they do not use the `javascript:` protocol (a sketch follows this list).
- Don't use the `eval`, `Function`, and `_.template` functions -- these are restricted by ESLint rules.
- Be careful when using `setTimeout` and `setInterval` in client-side code. If an attacker can manipulate the arguments and pass a string to
one of these, it is evaluated dynamically, which is equivalent to the dangerous `eval` function.
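As a rough sketch of the link-sanitizing bullet above (not an existing Kibana utility), user-provided URLs can be checked against an allow-list of protocols before they are rendered as links:
```ts
// Minimal sketch: only a small set of protocols is allowed for user-provided hrefs.
const ALLOWED_PROTOCOLS = new Set(['http:', 'https:', 'mailto:']);

export function isSafeHref(href: string): boolean {
  try {
    // Resolve relative URLs against the current origin so plain paths are still accepted.
    const url = new URL(href, window.location.origin);
    return ALLOWED_PROTOCOLS.has(url.protocol);
  } catch {
    // Unparseable input is treated as unsafe.
    return false;
  }
}
```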
### Cross-Site Request Forgery (CSRF/XSRF)
[_OWASP reference for CSRF_](https://owasp.org/www-community/attacks/csrf)
CSRF is a class of attacks where a user is forced to execute an action on a vulnerable website that they're logged into, usually without
their knowledge. Kibana defends against this by requiring [custom request
headers](https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#use-of-custom-request-headers)
for API endpoints. For more information, see [API Request
Headers](https://www.elastic.co/guide/en/kibana/master/api.html#api-request-headers).
**Best practices**
- Ensure all HTTP routes are registered with the [Kibana HTTP service](https://www.elastic.co/guide/en/kibana/master/http-service.html) to
take advantage of the custom request header security control.
- Note that HTTP GET requests do **not** require the custom request header; any routes that change data should [adhere to the HTTP
specification and use a different method (PUT, POST, etc.)](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods).
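As a hedged sketch of the first bullet (the plugin name and path are hypothetical, and the exact import path depends on where the plugin lives), a data-changing route registered through the HTTP service's router, using `POST` and a validation schema:
```ts
import { schema } from '@kbn/config-schema';
import type { CoreSetup } from 'src/core/server';

export function registerRoutes(core: CoreSetup) {
  const router = core.http.createRouter();

  // POST rather than GET because the route changes data; the body is validated with a schema.
  router.post(
    {
      path: '/api/my_plugin/notes',
      validate: {
        body: schema.object({ text: schema.string({ maxLength: 1000 }) }),
      },
    },
    async (context, request, response) => {
      // request.body is typed and validated by the time the handler runs.
      return response.ok({ body: { saved: request.body.text } });
    }
  );
}
```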
### Remote Code Execution (RCE)
[_OWASP reference for Command Injection_](https://owasp.org/www-community/attacks/Command_Injection),
[_OWASP reference for Code Injection_](https://owasp.org/www-community/attacks/Code_Injection)
RCE is a class of attacks where an attacker executes malicious code or commands on a vulnerable server. Kibana defends against this by using
ESLint rules to restrict vulnerable functions, and by hooking into or hardening usage of these in third-party dependencies.
**Best practices**
- Don't use the `eval`, `Function`, and `_.template` functions -- these are restricted by ESLint rules.
- Don't use dynamic `require`.
- Check for usages of templating libraries. Ensure that user-provided input doesn't influence the template and is used only as data for
rendering the template.
- Take extra caution when spawning child processes with any user input or parameters that are user-controlled.
### Prototype Pollution
Prototype Pollution is an attack that is unique to JavaScript environments. Attackers can abuse JavaScript's prototype inheritance to
"pollute" objects in the application, which is often used as a vector for XSS or RCE vulnerabilities. Kibana defends against this by
hardening sensitive functions (such as those exposed by `child_process`), and by requiring validation on all HTTP routes by default.
**Best practices**
- Check for instances of `anObject[a][b] = c` where `a`, `b`, and `c` are controlled by user input. This includes code paths where the
following logical code steps could be performed in separate files by completely different operations, or by recursively using dynamic
operations.
- Validate all user input, including API URL parameters, query parameters, and payloads. Preferably, use a schema that only allows specific
keys and values. At a minimum, implement a deny-list that prevents `__proto__` and `prototype.constructor` from being used within object
keys. A sketch of such a deny-list check follows this list.
- When calling APIs that spawn new processes or perform code generation from strings, protect against Prototype Pollution by using
`Object.hasOwnProperty` checks on any arguments to those APIs that originate from an Object. An example is the defunct Code app's
[`spawnProcess`](https://github.com/elastic/kibana/blob/b49192626a8528af5d888545fb14cd1ce66a72e7/x-pack/legacy/plugins/code/server/lsp/workspace_command.ts#L40-L44)
function.
- Common Node.js offenders: `child_process.spawn`, `child_process.exec`, `eval`, `Function('some string')`, `vm.runInContext(x)`,
`vm.runInNewContext(x)`, `vm.runInThisContext()`
- Common client-side offenders: `eval`, `Function('some string')`, `setTimeout('some string', num)`, `setInterval('some string', num)`
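A rough sketch of the deny-list mentioned in the validation bullet above (route schemas remain the preferred control; this is not an existing Kibana helper):
```ts
// Minimal sketch: recursively reject payloads whose keys could pollute prototypes.
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

export function assertNoForbiddenKeys(value: unknown): void {
  if (value === null || typeof value !== 'object') {
    return;
  }
  for (const key of Object.keys(value as Record<string, unknown>)) {
    if (FORBIDDEN_KEYS.has(key)) {
      throw new Error(`Forbidden key "${key}" in payload`);
    }
    assertNoForbiddenKeys((value as Record<string, unknown>)[key]);
  }
}
```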
See also:
- [Prototype pollution: The dangerous and underrated vulnerability impacting JavaScript applications |
portswigger.net](https://portswigger.net/daily-swig/prototype-pollution-the-dangerous-and-underrated-vulnerability-impacting-javascript-applications)
- [Prototype pollution attack in NodeJS application | Olivier
Arteau](https://github.com/HoLyVieR/prototype-pollution-nsec18/blob/master/paper/JavaScript_prototype_pollution_attack_in_NodeJS.pdf)
### Server-Side Request Forgery (SSRF)
[_OWASP reference for SSRF_](https://owasp.org/www-community/attacks/Server_Side_Request_Forgery)
SSRF is a class of attacks where a vulnerable server is forced to make an unintended request, usually to an HTTP API. This is often used as
a vector for information disclosure or injection attacks.
**Best practices**
- Ensure that all outbound requests from the Kibana server use hard-coded URLs.
- If user input is used to construct a URL for an outbound request, ensure that an allow-list is used to validate the endpoints and that
user input is escaped properly. Ideally, the allow-list should be set in `kibana.yml`, so only server administrators can change it.
- This is particularly relevant when using `transport.request` with the Elasticsearch client, as no automatic escaping is performed.
- Note that URLs are very hard to validate properly; exact match validation for user input is most preferable, while URL parsing or RegEx
validation should only be used if absolutely necessary.
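A minimal sketch of the allow-list validation described above, assuming the list of allowed hosts comes from a `kibana.yml` setting (the setting and host names are made up):
```ts
// Hosts the administrator has explicitly allowed, e.g. read from a kibana.yml setting at startup.
const allowedHosts = new Set(['alerts.example.com', 'webhooks.example.com']);

export function assertAllowedOutboundUrl(rawUrl: string): URL {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    throw new Error('Invalid outbound URL');
  }
  // Exact-match validation on protocol and hostname; avoid RegEx-based checks where possible.
  if (url.protocol !== 'https:' || !allowedHosts.has(url.hostname)) {
    throw new Error(`Outbound requests to ${url.hostname} are not allowed`);
  }
  return url;
}
```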
### Reverse tabnabbing
[_OWASP reference for Reverse Tabnabbing_](https://owasp.org/www-community/attacks/Reverse_Tabnabbing)
Reverse tabnabbing is an attack where a link to a malicious page is used to rewrite a vulnerable parent page. This is often used as a vector
for phishing attacks. Kibana defends against this by using the EUI framework, which automatically adds the `rel` attribute to anchor tags,
buttons, and other vulnerable DOM elements.
**Best practices**
- Use EUI components to build your UI whenever possible. Otherwise, ensure that any DOM elements that have an `href` attribute also have the
`rel="noreferrer noopener"` attribute specified. For more information, refer to the [OWASP HTML5 Security Cheat
Sheet](https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/HTML5_Security_Cheat_Sheet.md#tabnabbing).
- If using a non-EUI markdown renderer, use a custom link renderer for rendered links.
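For the rare case where EUI is not used, a minimal sketch of a link that opens a new tab safely:
```tsx
import React from 'react';

// The rel attribute prevents the opened page from reaching back to this one via window.opener.
export const ExternalLink = ({ href, children }: { href: string; children: React.ReactNode }) => (
  <a href={href} target="_blank" rel="noreferrer noopener">
    {children}
  </a>
);
```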
### Information disclosure
Information disclosure is not an attack, but it describes whenever sensitive information is accidentally revealed. This can be configuration
info, stack traces, or other data that the user is not authorized to access. This concern cannot be addressed with a single security
control, but at a high level, Kibana relies on the hapi framework to automatically redact stack traces and detailed error messages in HTTP
5xx response payloads.
**Best practices**
- Look for instances where sensitive information might accidentally be revealed, particularly in error messages, in the UI, and URL
parameters that are exposed to users.
- Make sure that sensitive request data is not forwarded to external resources. For example, copying client request headers and using them
to make another request could accidentally expose the user's credentials.

@@ -0,0 +1,120 @@
---
id: kibDevPrinciples
slug: /kibana-dev-docs/contributing/dev-principles
title: Developer principles
summary: Follow our development principles to help keep our code base stable, maintainable and scalable.
date: 2021-03-04
tags: ['kibana', 'onboarding', 'dev', 'architecture']
---
Over time, the Kibana project has been shaped by certain principles. Like Kibana itself, some of these principles were formed by intention while others were the result of evolution and circumstance, but today all are important for the continued success and maintainability of Kibana.
This document is inspired by and at times reflects points already covered in the [Elasticsearch Development Constitution](https://github.com/elastic/engineering/blob/master/development_constitution.md). When this happens it is either to reframe a principle from the development constitution in terms that are more clearly relevant to Kibana or to emphasize principles that are underrepresented in the Kibana project today.
While the Elasticsearch Development Constitution is written with Elasticsearch in mind, its principles are broadly relevant throughout Elastic's engineering culture and especially for the Kibana project. For that reason, Kibana developers should read and internalize that document as well.
## Don't compromise on quality
Kibana is a mission-critical piece of software, but it's not a service we deploy and manage. When we ship a bug that causes instability, regression, or vulnerability, the issue will significantly impact production deployments for our users, and we will not be able to roll back the deployment to fix it. Even if an issue is severe enough to warrant an emergency patch, it will continue to hurt users in production until we release a fix (at least a week) and users upgrade to it (days to weeks).
It's natural for a developer to want to get an important change merged into the product so it can be released, and it's good for our users to have access to frequent improvements in our releases, but we must not sacrifice the quality of our changes in order to meet these pressures.
Build small features of high quality, and then enhance those features with additional high quality improvements over multiple iterations. Never merge code that is not release-ready.
## Do the right thing even when it's the hard thing
To ensure Kibana remains reliable and relevant in the long term, we will occasionally need to make big changes to existing infrastructure and features where incremental progress is impractical. It takes a lot of hard work and time to get these changes right, and they may break behaviors that developers and end-users have come to rely on, but we must not be afraid to do them if they put the project in a better place.
For changes like this, it's important to clearly define goals, establish scope, gain consensus, and communicate the changes effectively and transparently.
The amount of work associated with a change should give us pause to reflect, but we should not shy away from making big changes to the product when they're the correct solution.
## Automate tests through CI
Kibana is far too large a project for us to reliably find breakages through manual QA efforts. It would take years of developer time to manually test all of the features of Kibana in a single version, and even with a team of QA engineers and hundreds of thousands of lines of automated tests, it still takes months of cumulative developer time to manually QA the releases we have.
It's imperative for the future success and stability of the project, as well as for team velocity, that all changes have extensive automated testing that gives us confidence that ongoing churn on the codebase does not break existing functionality. We must achieve this level of testing without increasing the manual QA burden.
Additionally, since all merged code should be release-ready, it's imperative that automated testing be run through our CI, preferably as blocking status checks on pull requests.
A feature that requires manual QA to ensure it continues to work is worse than not having that feature at all.
## Organize by domain
The Kibana codebase should be organized roughly into software domains, which is to say that modules are defined based on their role within the product rather than by the technical type of code that comprises them. The goal is to help developers find code quickly by placing services in intuitive locations. In practice the two highest technical levels we use to express domains are that of core and plugins, but even within these modules we seek to organize the code by more specific domains.
Within core itself we express domains as services such as HTTP, config, elasticsearch, etc. These things are essential elements to constructing most useful features of Kibana, which is why they belong to the "core" domain.
Each plugin is one top-level domain of the Kibana project, and will generally be constructed by many sub-domains within. In this way, plugins are modeled in the same spirit as core itself.
Be cautious when adding new domains as doing so inherently expands the scope of the parent domain, which in the case of core or plugin boundaries means the scope of Kibana itself.
Code should be organized by domain up until you've reached a level of granularity in a module where it is no longer practical.
## Prefer one way to do things
When building APIs, whether they be programmatic or through HTTP, there are usually multitudes of use cases, and since it's natural as API authors to want to provide a convenient experience for consumers, this often results in a variety of different APIs that achieve similar results. Unfortunately, each API we create causes a burden in maintenance, testing, tooling, and documentation, and this burden is almost always greater than the work needed to use an alternative API.
In order to achieve our goals of creating a consistent product experience as well as an efficient-to-maintain project, we should aim to leverage existing features, APIs, and standards rather than reinventing the wheel. Unless it's part of a broader effort to change the way Kibana does something, it's best to leverage existing APIs and prefer existing conventions where possible.
## Prioritize consistent code within Kibana
Kibana was around before most of its dependencies existed, and we expect Kibana to be around after most of its current dependencies are no longer useful, relevant, or appropriate for the project. We must ensure long term sustainability by building Kibana independently of the popular patterns, tools, and idioms of the day.
As with any codebase that evolves organically over time, Kibana has a fair share of inconsistency in style, patterns, and conventions, but we aim to reverse this trend in favor of a more consistent codebase. To accomplish this, we must write new code in the spirit of conventions established by existing code, or we must seek consensus for introducing new styles, patterns, and conventions to Kibana.
We can leverage the cumulative knowledge of the broader developer community to iteratively improve the quality of our software, but the fundamental building blocks of Kibana are designed to support Kibana's needs as a project rather than the conveniences of popular tools and developer community sentiment.
Whenever possible, automated tools should be used to aid in consistency (e.g. linting, prettier).
## TypeScript over JavaScript
JavaScript is no longer the language of choice for Kibana, and any JavaScript that gets transpiled in the project is considered technical debt. Our goal is near-complete replacement of JavaScript with TypeScript throughout the repo.
Static typing gives us more reliable mechanisms to do changes across the project, whether it be upgrading systemic dependencies, making changes to the file structure, or refactoring important modules that have many downstream dependencies. Static analysis allows for more robust tooling to help identify inter-module dependencies, security vulnerabilities, and orphaned code.
Having one definitively preferred language helps developers navigate the broader project and work more freely across teams.
## Be wary of dependencies
The Kibana project is not just the code we commit to the repo but rather the combined total of all of the source code from our own repo and all of the external dependencies we rely on. When a user encounters a deficiency in Kibana, it matters not whether the root cause is in code we've written or external code we depend on. Additionally, relying on a dependency is a considerable expense in terms of cognitive burden, maintenance overhead, and risk.
Except for highly specialized functionality, dependencies often do more harm in the long term than their short term benefits justify. Always be critical of new external dependencies being added to the project, and frequently re-evaluate the use of existing dependencies.
When the use of an external dependency is necessary, ensure there is sufficient integration testing in Kibana to verify it continues to operate the way we'd expect when we change the consuming code in Kibana or upgrade the dependency code.
Except in specific cases where widespread consensus was gained and clear ownership is established, third party dependencies should not be exposed directly as features of Kibana, whether it be through the UI, HTTP API, or programmatic interfaces.
## Don't share code prematurely
There are many modules throughout Kibana's codebase where we have generic utilities that seem appropriate for use elsewhere in the codebase, but doing so is creating another public interface which has a cost in terms of maintenance, testing, documentation, and complexity that increases with each usage. Over the long term, shared utilities tend to accumulate additional complexity in order to be flexible enough for the various use cases of its consumers, particularly when they are shared across domain/plugin boundaries.
Just because some code can be reused does not mean that it should be reused. Duplicating code has drawbacks, but the complexity scales linearly, which is easier for us to manage across so many teams in a project this size.
When we have many duplicate implementations of the same code across different modules, then we can more effectively evaluate how similar the functions are in practice to determine whether creating a new public interface for the sake of code sharing is appropriate.
## Don't abstract prematurely
Overuse of abstractions makes it harder to build a mental model about Kibana, and we've already encountered scenarios in Kibana's history where so many abstractions were in place that it was difficult to maintain them in practice and no one person understood how all the pieces worked together.
At the same time, abstractions are necessary in a project like Kibana. We must be cautious about introducing new abstractions and ensure there is sufficient testing and documentation in place when we do create them.
Every situation is different, but in general it's better to have abstractions emerge when multiple different implementations of the same service exist rather than in anticipation of that possibility in the future. It's much easier to introduce a new abstraction when we need it than it is to remove an unnecessary abstraction when we don't. For the same reason, it's important that we only write code that we intend to use instead of code that we think might be useful in the future.
Just because something can be abstracted does not mean that it should be abstracted.
## Error cases are features
When two or more modules interact with one another, and especially when two or more independent systems (i.e. integrations) interface with one another, a possible outcome is an error case. We must build Kibana in such a way that errors behave consistently in documented ways that are verified through automated testing, which means these errors are features of Kibana.
In some scenarios the error cases are already codified by the module or service being consumed, and we simply need to handle those cases explicitly from the consuming side (e.g. HTTP response codes). In other cases, an error might occur as the result of an exception being thrown, in which case the consuming code should have a defined way to handle uncaught exceptions with as little impact on unrelated functionality as possible (e.g. render a generic error message in place of this component, return a 500 error, etc).
Other than in exceptionally rare cases, errors should never be swallowed, and error messages should usually be descriptive and actionable.
## Document for others
The primary consumers of the code we write, the APIs that we create, and the features we ship are people other than ourselves, which means the people using our tools do not share the cumulative domain knowledge and expertise that crafted them. As authors of these tools, we must share our knowledge in the form of documentation.
Features that we anticipate end users, admins, and plugin developers consuming should be documented through our official docs, but module-level READMEs and code comments are also appropriate.
Documentation is a critical part of developing features and code, so an undocumented feature is an incomplete feature.

@@ -1,6 +1,6 @@
---
id: kibGitHub
-slug: /kibana-dev-docs/github
+slug: /kibana-dev-docs/contributing/github
title: How we use Github
summary: Forking, branching, committing and using labels in the Kibana GitHub repo
date: 2021-09-16