Merge branch 'main' into main
superjolt authored Feb 6, 2025
2 parents 863a5df + c612258 · commit df23edd
Showing 131 changed files with 7,221 additions and 2,823 deletions.
57 changes: 57 additions & 0 deletions config/kubernetes/default/deployments/webapp.yaml
@@ -0,0 +1,57 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
spec:
replicas: 2
selector:
matchLabels:
app: webapp
template:
metadata:
labels:
app: webapp
annotations:
# Our internal logs aren't structured, so we use logfmt_sloppy to just log stdout and stderr
# See https://thehub.github.com/epd/engineering/dev-practicals/observability/logging/ for more details
fluentbit.io/parser: logfmt_sloppy
observability.github.com/splunk_index: docs-internal
spec:
dnsPolicy: Default
containers:
- name: webapp
image: docs-internal
resources:
requests:
cpu: 1000m
memory: 4500Mi
limits:
cpu: 8000m
memory: 16Gi
ports:
- name: http
containerPort: 4000
protocol: TCP
envFrom:
- secretRef:
name: vault-secrets
- configMapRef:
name: kube-cluster-metadata
# application-config is created at deploy time from
# configuration set in config/moda/configuration/*/env.yaml
- configMapRef:
name: application-config
# Zero-downtime deploys
# https://thehub.github.com/engineering/products-and-services/internal/moda/feature-documentation/pod-lifecycle/#required-prestop-hook
# https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
lifecycle:
preStop:
exec:
command: ['sleep', '5']
readinessProbe:
initialDelaySeconds: 5
httpGet:
# WARNING: This should be updated to a meaningful endpoint for your application that returns a 200 once the app is fully started.
# See: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
path: /healthcheck
port: http
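The readiness probe above only controls whether the pod receives traffic. If the app can wedge after startup, pairing it with a liveness probe lets Kubernetes restart the container. A minimal sketch, assuming the same `/healthcheck` endpoint and the named `http` port (the timing values are illustrative, not taken from the manifest above):

```yaml
# Illustrative sketch; assumes /healthcheck is served on the named "http" port.
# A failing liveness probe restarts the container, whereas a failing readiness
# probe only removes the pod from service endpoints.
livenessProbe:
  initialDelaySeconds: 10   # illustrative; give the Node app time to finish booting
  periodSeconds: 15
  failureThreshold: 3
  httpGet:
    path: /healthcheck
    port: http
```

Whether a restart-on-failure probe is appropriate depends on how the app fails; a probe that flaps under load can cause restart loops.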
19 changes: 19 additions & 0 deletions config/kubernetes/default/services/webapp.yaml
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
name: webapp
labels:
service: webapp
annotations:
moda.github.net/domain-name: 'docs-internal-%environment%.service.%region%.github.net'
# HTTP app reachable inside GitHub's network (employee website)
moda.github.net/load-balancer-type: internal-http
spec:
ports:
- name: http
port: 4000
protocol: TCP
targetPort: http
selector:
app: webapp
type: LoadBalancer
2 changes: 1 addition & 1 deletion config/kubernetes/production/deployments/webapp.yaml
@@ -37,7 +37,7 @@ spec:
name: vault-secrets
- configMapRef:
name: kube-cluster-metadata
# application-config is crated at deploy time from
# application-config is created at deploy time from
# configuration set in config/moda/configuration/*/env.yaml
- configMapRef:
name: application-config
9 changes: 9 additions & 0 deletions config/moda/configuration/default/env.yaml
@@ -0,0 +1,9 @@
data:
MODA_APP_NAME: docs-internal
NODE_ENV: production
NODE_OPTIONS: '--max-old-space-size=4096'
PORT: '4000'
ENABLED_LANGUAGES: 'en,zh,es,pt,ru,ja,fr,de,ko'
RATE_LIMIT_MAX: '21'
# Moda uses a non-default port for sending datadog metrics
DD_DOGSTATSD_PORT: '28125'
6 changes: 3 additions & 3 deletions config/moda/configuration/production/env.yaml
@@ -1,12 +1,12 @@
data:
MODA_APP_NAME: docs-internal
# Identifies the service deployment environment as production
# Equivalent to HEAVEN_DEPLOYED_ENV === 'production'
MODA_PROD_SERVICE_ENV: 'true'
NODE_ENV: production
NODE_OPTIONS: '--max-old-space-size=4096'
PORT: '4000'
ENABLED_LANGUAGES: 'en,zh,es,pt,ru,ja,fr,de,ko'
RATE_LIMIT_MAX: '21'
# Moda uses a non-default port for sending datadog metrics
DD_DOGSTATSD_PORT: '28125'
# Identifies the service deployment environment as production
# Equivalent to HEAVEN_DEPLOYED_ENV === 'production'
MODA_PROD_SERVICE_ENV: 'true'
21 changes: 21 additions & 0 deletions config/moda/deployment.yaml
@@ -7,6 +7,27 @@ environments:
profile: general
region: iad

- name: staging-cedar
require_pipeline: false
notify_still_locked: true # Notify last person to lock this after an hour
cluster_selector:
profile: general
region: iad

- name: staging-pine
require_pipeline: false
notify_still_locked: true # Notify last person to lock this after an hour
cluster_selector:
profile: general
region: iad

- name: staging-spruce
require_pipeline: false
notify_still_locked: true # Notify last person to lock this after an hour
cluster_selector:
profile: general
region: iad

required_builds:
- docs-internal-moda-config-bundle / docs-internal-moda-config-bundle
- docs-internal-docker-image / docs-internal-docker-image
@@ -77,7 +77,7 @@ puts jwt
### Example: Using Python to generate a JWT

> [!NOTE]
> You must run `pip install PyJWT` to install the `PyJWT` package in order to use this script.
> You must run `pip install PyJWT cryptography` to install the `PyJWT` and `cryptography` packages in order to use this script.
```python copy
#!/usr/bin/env python3
@@ -12,7 +12,7 @@ topics:
product: '{% data reusables.billing.enhanced-billing-platform-product %}'
---

>[!IMPORTANT] {% ifversion fpt %}If you have a {% data variables.product.prodname_free_user %} plan or a {% data variables.product.prodname_pro %} plan, this article does not apply to you.{% elsif ghec %}If you have not migrated to the enhanced billing platform, this article does not apply to you.{% endif %}
>[!IMPORTANT] {% ifversion fpt %}If you want to know about billing for your personal user account, this article does not apply to you.{% elsif ghec %}If you have not migrated to the enhanced billing platform, this article does not apply to you.{% endif %}
>
> To check if you are on the enhanced billing platform, see [How do I know if I can access the enhanced billing platform?](/billing/using-the-new-billing-platform/about-the-new-billing-platform-for-enterprises#how-do-i-know-if-i-can-access-the-enhanced-billing-platform).
@@ -20,7 +20,7 @@ The enhanced billing platform provides better spending control and detailed usag

The products shown in the enhanced billing platform are determined by your {% data variables.product.github %} plan and subscriptions.

### {% data variables.product.prodname_team %}
### Organizations on {% data variables.product.prodname_team %} or {% data variables.product.prodname_free_team %}

* {% data variables.product.prodname_actions %}
* {% data variables.product.prodname_github_codespaces %}
@@ -19,7 +19,7 @@ shortTitle: Add licenses to your account
>[!IMPORTANT] If you pay by invoice, you need to contact your account manager in {% data variables.contact.contact_enterprise_sales %} to add licenses to your enterprise account.
{% endif %}

If you have access to the new billing platform, you can add {% ifversion enterprise-licensing-language %}licenses{% else %}seats{% endif %} to your account through the "Licensing" page. To check if you have access, see [AUTOTITLE](/billing/using-the-new-billing-platform/about-the-new-billing-platform-for-enterprises#how-do-i-know-if-i-can-access-the-new-billing-platform).
If you have access to the new billing platform{% ifversion fpt %} with an organization on a {% data variables.product.prodname_team %} plan{% endif %}, you can add {% ifversion enterprise-licensing-language %}licenses{% else %}seats{% endif %} to your account through the "Licensing" page. To check if you have access, see [AUTOTITLE](/billing/using-the-new-billing-platform/about-the-new-billing-platform-for-enterprises#how-do-i-know-if-i-can-access-the-new-billing-platform).

{% ifversion fpt %}
{% data reusables.profile.access_org %}
@@ -16,14 +16,18 @@ shortTitle: Get started
If you don't already have access to the enhanced billing platform, you may be able to get started.

{% ifversion fpt %}
* If you are **new** to {% data variables.product.github %}, set up a {% data variables.product.prodname_team %} plan account. See [Team](https://github.com/pricing) on the {% data variables.product.github %} pricing page.
* If you are **new** to {% data variables.product.github %}, create an organization on a {% data variables.product.prodname_free_team %} or {% data variables.product.prodname_team %} plan.
{% endif %}
* If you are **new** to {% data variables.product.prodname_ghe_cloud %}, set up a trial of {% data variables.product.prodname_ghe_cloud %}. See [AUTOTITLE](/admin/overview/setting-up-a-trial-of-github-enterprise-cloud).
{% ifversion ghec %}
* If you have an **existing** enterprise account and pay by **invoice**, contact your account manager in {% data variables.contact.contact_enterprise_sales %} to discuss switching when your contract renews.
* If you have an **existing** enterprise account and pay via **credit card or PayPal**, wait for an in-product prompt to transition.
{% endif %}

{% ifversion fpt %}
For a comparison of plans, see the {% data variables.product.pricing_link %} page.
{% endif %}

## Next steps

* To **learn about billing cycles**, see [AUTOTITLE](/billing/using-the-new-billing-platform/about-the-billing-cycle).
@@ -22,7 +22,7 @@ redirect_from:

{% data reusables.rai.code-scanning.copilot-autofix-note %}

{% data variables.product.prodname_copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.product.prodname_copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model GPT-4o from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.
{% data variables.product.prodname_copilot_autofix_short %} generates potential fixes that are relevant to the existing source code and translates the description and location of an alert into code changes that may fix the alert. {% data variables.product.prodname_copilot_autofix_short %} uses internal {% data variables.product.prodname_copilot %} APIs interfacing with the large language model GPT 4o from OpenAI, which has sufficient generative capabilities to produce both suggested fixes in code and explanatory text for those fixes.

{% data variables.product.prodname_copilot_autofix_short %} is allowed by default and enabled for every repository using {% data variables.product.prodname_codeql %}, but you can choose to opt out and disable {% data variables.product.prodname_copilot_autofix_short %}. To learn how to disable {% data variables.product.prodname_copilot_autofix_short %} at the enterprise, organization and repository levels, see [AUTOTITLE](/code-security/code-scanning/managing-code-scanning-alerts/disabling-autofix-for-code-scanning).

@@ -65,7 +65,7 @@ jobs:
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d
uses: dependabot/fetch-metadata@d7267f607e9d3fb96fc2fbe83e0af444713e90b7
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
# The following properties are now available:
@@ -102,7 +102,7 @@
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d
uses: dependabot/fetch-metadata@d7267f607e9d3fb96fc2fbe83e0af444713e90b7
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
- name: Add a label for all production dependencies
@@ -136,7 +136,7 @@
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d
uses: dependabot/fetch-metadata@d7267f607e9d3fb96fc2fbe83e0af444713e90b7
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
- name: Approve a PR
@@ -173,7 +173,7 @@
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d
uses: dependabot/fetch-metadata@d7267f607e9d3fb96fc2fbe83e0af444713e90b7
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
- name: Enable auto-merge for Dependabot PRs
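The fragments above are excerpts from several separate workflows. Stitched together under stated assumptions, a complete minimal auto-merge workflow using the same pinned `dependabot/fetch-metadata` SHA might look like the sketch below; the patch-only condition and the merge strategy are illustrative choices, not taken from the excerpts.

```yaml
name: Dependabot auto-merge
on: pull_request

permissions:
  contents: write
  pull-requests: write

jobs:
  dependabot:
    runs-on: ubuntu-latest
    # Only act on pull requests opened by Dependabot itself
    if: github.event.pull_request.user.login == 'dependabot[bot]'
    steps:
      - name: Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@d7267f607e9d3fb96fc2fbe83e0af444713e90b7
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
      - name: Enable auto-merge for Dependabot PRs
        # Illustrative assumption: auto-merge only semver-patch updates
        if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
        run: gh pr merge --auto --merge "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Auto-merge only completes once required status checks pass and the repository has auto-merge enabled.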
@@ -26,7 +26,7 @@ topics:
* {% data variables.product.prodname_copilot_edits_vscode_short %} to make changes across multiple files (**only in {% data variables.product.prodname_vscode %} and {% data variables.product.prodname_vs %}**)
* {% data variables.product.prodname_copilot_chat_short %} in {% data variables.product.prodname_vscode %}, {% data variables.product.prodname_vs %}, JetBrains IDEs, and {% data variables.product.prodname_dotcom_the_website %}
* Block suggestions matching public code
* Access to {% data variables.copilot.copilot_claude_sonnet %} models
* Access to the {% data variables.copilot.copilot_claude_sonnet %} and {% data variables.copilot.copilot_gemini_flash %} models
* Access to {% data variables.product.prodname_copilot_extensions_short %} in {% data variables.product.prodname_vscode %}, {% data variables.product.prodname_vs %}, JetBrains IDEs, {% data variables.product.prodname_dotcom_the_website %}, and {% data variables.product.prodname_mobile %}

## What are the limitations of {% data variables.product.prodname_copilot_free_short %}?
@@ -38,12 +38,15 @@ You can choose whether your prompts and {% data variables.product.prodname_copil
{% data reusables.user-settings.copilot-settings %}
1. To allow or prevent {% data variables.product.prodname_dotcom %} using your data, select or deselect **Allow {% data variables.product.prodname_dotcom %} to use my code snippets from the code editor for product improvements**.

## Enabling or disabling {% data variables.copilot.copilot_claude_sonnet %}
## Enabling or disabling alternative AI models

You can choose whether to allow use of Anthropic's {% data variables.copilot.copilot_claude_sonnet %} model as an alternative to {% data variables.product.prodname_copilot_short %}'s default model. For more information, see [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
You can choose whether to allow the following AI models to be used as an alternative to {% data variables.product.prodname_copilot_short %}'s default model.

* {% data variables.copilot.copilot_claude_sonnet %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot)
* {% data variables.copilot.copilot_gemini_flash %} - see [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot)

{% data reusables.user-settings.copilot-settings %}
1. To the right of **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, select the dropdown menu, then click **Enabled** or **Disabled**.
1. To the right of the model name, select the dropdown menu, then click **Enabled** or **Disabled**.

## Enabling or disabling web search for {% data variables.product.prodname_copilot_chat %}

@@ -33,8 +33,7 @@ You can configure any of the following policies for your enterprise:
* [{% data variables.product.prodname_copilot_extensions %}](#github-copilot-extensions)
* [Suggestions matching public code](#suggestions-matching-public-code)
* [Give {% data variables.product.prodname_copilot_short %} access to Bing](#give-copilot-access-to-bing)
* [{% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}](#copilot-access-to-claude-35-sonnet)
* [{% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models](#copilot-access-to-the-o1-and-o3-families-of-models)
* [{% data variables.product.prodname_copilot_short %} access to alternative AI models](#copilot-access-to-alternative-ai-models)

### {% data variables.product.prodname_copilot_short %} in {% data variables.product.prodname_dotcom_the_website %}

@@ -75,25 +74,17 @@ You can chat with {% data variables.product.prodname_copilot %} in your IDE to g
{% data variables.product.prodname_copilot_chat %} can use Bing to provide enhanced responses by searching the internet for information related to a question. Bing search is particularly helpful when discussing new technologies or highly specific subjects.

### {% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}
### {% data variables.product.prodname_copilot_short %} access to alternative AI models

{% data reusables.copilot.claude-sonnet-preview-note %}
> [!NOTE] The following models are currently in {% data variables.release-phases.public_preview %} as AI models for {% data variables.product.prodname_copilot %}, and are subject to change. The [AUTOTITLE](/free-pro-team@latest/site-policy/github-terms/github-pre-release-license-terms) apply to your use of these products.
By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, members of your enterprise can choose to use this model rather than the default `GPT 4o` model. See [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
By default, {% data variables.product.prodname_copilot_chat_short %} uses the GPT 4o model. If you grant access to the alternative models, members of your enterprise can choose to use these models rather than the default GPT 4o model. The available alternative models are:

### {% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models

{% data reusables.models.o1-models-preview-note %}

By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 or o3 models, members of your enterprise can select to use these models rather than the default `GPT 4o` model.

The o1 family of models includes the following models:

* `o1`/`o1-preview`: These models are focused on advanced reasoning and solving complex problems, in particular in math and science. They respond more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to each of these models per day.

The o3 family of models includes one model:

* `o3-mini`: This is the next generation of reasoning models, following from `o1` and `o1-mini`. The `o3-mini` model outperforms `o1` on coding benchmarks with response times that are comparable to `o1-mini`, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.
* **{% data variables.copilot.copilot_claude_sonnet %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
* **{% data variables.copilot.copilot_gemini_flash %}**. See [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).
* **OpenAI's o1 and o3 models**
* **o1**: This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT 4o model. Each member of your enterprise can make 10 requests to this model per day.
* **o3-mini**: This is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.

### {% data variables.product.prodname_copilot_short %} Metrics API access
