Troubleshooting Cloud Functions
This document shows you some of the common issues you might encounter and how to deal with them.
Deployment
The deployment phase is a frequent source of issues. Many of the issues you might run into during deployment are related to roles and permissions. Others have to do with incorrect configuration.
User with Viewer role cannot deploy a function
A user who has been assigned the Project Viewer or Cloud Functions Viewer role has read-only access to functions and function details. These roles are not allowed to deploy new functions.
The error message
Cloud Console
You need permissions for this action. Required permission(s): cloudfunctions.functions.create
Cloud SDK
ERROR: (gcloud.functions.deploy) PERMISSION_DENIED: Permission 'cloudfunctions.functions.sourceCodeSet' denied on resource 'projects/<PROJECT_ID>/locations/<LOCATION>' (or resource may not exist)
The solution
Assign the user a role that has the appropriate access.
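For example, a minimal sketch of granting a role that can deploy (PROJECT_ID and USER_EMAIL are placeholders):

# Grant the Cloud Functions Developer role, which includes cloudfunctions.functions.create.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=user:USER_EMAIL \
  --role=roles/cloudfunctions.developer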
User with Project Viewer or Cloud Functions role cannot deploy a function
In order to deploy a function, a user who has been assigned the Project Viewer, the Cloud Functions Developer, or Cloud Functions Admin role must be assigned an additional role.
The error message
Cloud Console
User does not have the iam.serviceAccounts.actAs permission on <PROJECT_ID>@appspot.gserviceaccount.com required to create function. You can fix this by running 'gcloud iam service-accounts add-iam-policy-binding <PROJECT_ID>@appspot.gserviceaccount.com --member=user: --role=roles/iam.serviceAccountUser'
Cloud SDK
ERROR: (gcloud.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[Missing necessary permission iam.serviceAccounts.actAs for <USER> on the service account <PROJECT_ID>@appspot.gserviceaccount.com. Ensure that service account <PROJECT_ID>@appspot.gserviceaccount.com is a member of the project <PROJECT_ID>, then grant <USER> the role 'roles/iam.serviceAccountUser'. You can do that by running 'gcloud iam service-accounts add-iam-policy-binding <PROJECT_ID>@appspot.gserviceaccount.com --member=<USER> --role=roles/iam.serviceAccountUser' In case the member is a service account please use the prefix 'serviceAccount:' instead of 'user:'.]
The solution
Assign the user an additional role, the Service Account User IAM role (roles/iam.serviceAccountUser), scoped to the Cloud Functions runtime service account.
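Following the command embedded in the error message above, the binding might look like this (PROJECT_ID and USER_EMAIL are placeholders):

# Grant Service Account User on the default runtime service account.
gcloud iam service-accounts add-iam-policy-binding PROJECT_ID@appspot.gserviceaccount.com \
  --member=user:USER_EMAIL \
  --role=roles/iam.serviceAccountUser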
Deployment service account missing the Service Agent role when deploying functions
The Cloud Functions service uses the Cloud Functions Service Agent service account (service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) when performing administrative actions on your project. By default this account is assigned the Cloud Functions cloudfunctions.serviceAgent role. This role is required for Cloud Pub/Sub, IAM, Cloud Storage and Firebase integrations. If you have changed the role for this service account, deployment fails.
The error message
Cloud Console
Missing necessary permission resourcemanager.projects.getIamPolicy for serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com on project <PROJECT_ID>. Please grant serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com the roles/cloudfunctions.serviceAgent role. You can do that by running 'gcloud projects add-iam-policy-binding <PROJECT_ID> --member=serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com --role=roles/cloudfunctions.serviceAgent'
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Missing necessary permission resourcemanager.projects.getIamPolicy for serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com on project <PROJECT_ID>. Please grant serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com the roles/cloudfunctions.serviceAgent role. You can do that by running 'gcloud projects add-iam-policy-binding <PROJECT_ID> --member=serviceAccount:service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com --role=roles/cloudfunctions.serviceAgent'
The solution
Reset this service account to the default role.
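The error message itself suggests the command; re-granting the role might look like this (PROJECT_ID and PROJECT_NUMBER are placeholders):

# Restore the default Cloud Functions Service Agent role.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:service-PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
  --role=roles/cloudfunctions.serviceAgent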
Deployment service account missing Pub/Sub permissions when deploying an event-driven function
The Cloud Functions service uses the Cloud Functions Service Agent service account (service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) when performing administrative actions. By default this account is assigned the Cloud Functions cloudfunctions.serviceAgent role. To deploy event-driven functions, the Cloud Functions service must access Cloud Pub/Sub to configure topics and subscriptions. If the role assigned to the service account is changed and the appropriate permissions are not otherwise granted, the Cloud Functions service cannot access Cloud Pub/Sub and the deployment fails.
The error message
Cloud Console
Failed to configure trigger PubSub projects/<PROJECT_ID>/topics/<FUNCTION_NAME>
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Failed to configure trigger PubSub projects/<PROJECT_ID>/topics/<FUNCTION_NAME>
The solution
You can:
- Reset this service account to the default role, or
- Grant the pubsub.subscriptions.* and pubsub.topics.* permissions to your service account manually (see the sketch below).
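For the manual route, one possible approach is a custom role bundling the Pub/Sub permissions. The role ID gcfPubsubAccess and the exact permission list here are illustrative; custom roles cannot use wildcards, so each permission must be spelled out:

# Create a custom role carrying the needed Pub/Sub permissions (illustrative subset).
gcloud iam roles create gcfPubsubAccess --project=PROJECT_ID \
  --permissions=pubsub.topics.create,pubsub.topics.get,pubsub.topics.delete,pubsub.subscriptions.create,pubsub.subscriptions.get,pubsub.subscriptions.delete

# Bind the custom role to the Cloud Functions service agent.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:service-PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
  --role=projects/PROJECT_ID/roles/gcfPubsubAccess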
User missing permissions for runtime service account while deploying a function
In environments where multiple functions are accessing different resources, it is a common practice to use per-function identities, with named runtime service accounts rather than the default runtime service account (PROJECT_ID@appspot.gserviceaccount.com).
However, to use a non-default runtime service account, the deployer must have the iam.serviceAccounts.actAs permission on that non-default account. A user who creates a non-default runtime service account is automatically granted this permission, but other deployers must have this permission granted by a user with the correct permissions.
The error message
Cloud SDK
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Bad Request], message=[Invalid function service account requested: <SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com]
The solution
Assign the user the roles/iam.serviceAccountUser role on the non-default <SERVICE_ACCOUNT_NAME> runtime service account. This role includes the iam.serviceAccounts.actAs permission.
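A minimal sketch of that grant (all names are placeholders):

# Give the deployer actAs on the named runtime service account.
gcloud iam service-accounts add-iam-policy-binding \
  SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --member=user:DEPLOYER_EMAIL \
  --role=roles/iam.serviceAccountUser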
Runtime service account missing project bucket permissions while deploying a function
Cloud Functions can only be triggered by events from Cloud Storage buckets in the same Google Cloud Platform project. In addition, the Cloud Functions Service Agent service account (service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) needs the cloudfunctions.serviceAgent role on your project.
The error message
Cloud Console
Deployment failure: Insufficient permissions to (re)configure a trigger (permission denied for bucket <BUCKET_ID>). Please, give owner permissions to the editor role of the bucket and try again.
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Insufficient permissions to (re)configure a trigger (permission denied for bucket <BUCKET_ID>). Please, give owner permissions to the editor role of the bucket and try again.
The solution
You can:
- Reset this service account to the default role, or
- Grant the runtime service account the cloudfunctions.serviceAgent role, or
- Grant the runtime service account the storage.buckets.{get, update} and the resourcemanager.projects.get permissions (a custom-role sketch follows this list).
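For the last option, a hedged sketch using a custom role (the role ID gcfTriggerConfig is illustrative):

# Bundle the listed permissions into a custom role and bind it to the service agent.
gcloud iam roles create gcfTriggerConfig --project=PROJECT_ID \
  --permissions=storage.buckets.get,storage.buckets.update,resourcemanager.projects.get

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:service-PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
  --role=projects/PROJECT_ID/roles/gcfTriggerConfig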
User with Project Editor role cannot make a function public
To ensure that unauthorized developers cannot modify authentication settings for function invocations, the user or service that is deploying the function must have the cloudfunctions.functions.setIamPolicy
permission.
The error message
Cloud SDK
ERROR: (gcloud.functions.add-iam-policy-binding) ResponseError: status=[403], code=[Forbidden], message=[Permission 'cloudfunctions.functions.setIamPolicy' denied on resource 'projects/<PROJECT_ID>/locations/<LOCATION>/functions/<FUNCTION_NAME>' (or resource may not exist).]
The solution
You can:
- Assign the deployer either the Project Owner or the Cloud Functions Admin role, both of which contain the cloudfunctions.functions.setIamPolicy permission (see the sketch below), or
- Grant the permission manually by creating a custom role.
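The first option might look like this (placeholders throughout):

# Grant the deployer the Cloud Functions Admin role, which includes setIamPolicy.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=user:DEPLOYER_EMAIL \
  --role=roles/cloudfunctions.admin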
Function deployment fails due to Cloud Build not supporting VPC-SC
Cloud Functions uses Cloud Build to build your source code into a runnable container. In order to use Cloud Functions with VPC Service Controls, you must configure an access level for the Cloud Build service account in your service perimeter.
The error message
Cloud Console
One of the below:
Error in the build environment OR Unable to build your function due to VPC Service Controls. The Cloud Build service account associated with this function needs an appropriate access level on the service perimeter. Please grant access to the Cloud Build service account: '{PROJECT_NUMBER}@cloudbuild.gserviceaccount.com' by following the instructions at https://cloud.google.com/functions/docs/securing/using-vpc-service-controls#grant-build-access
Cloud SDK
One of the below:
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Error in the build environment OR Unable to build your function due to VPC Service Controls. The Cloud Build service account associated with this function needs an appropriate access level on the service perimeter. Please grant access to the Cloud Build service account: '{PROJECT_NUMBER}@cloudbuild.gserviceaccount.com' by following the instructions at https://cloud.google.com/functions/docs/securing/using-vpc-service-controls#grant-build-access
The solution
If your project's Audited Resource logs mention "Request is prohibited by organization's policy" in the VPC Service Controls section and have a Cloud Storage label, you need to grant the Cloud Build Service Account access to the VPC Service Controls perimeter.
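A sketch of granting that access with an access level, assuming an existing access policy and perimeter (the level name cloud-build-access and the file name are illustrative):

# Define an access level whose only member is the Cloud Build service account.
cat > build-access.yaml <<'EOF'
- members:
  - serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com
EOF

gcloud access-context-manager levels create cloud-build-access \
  --title="Cloud Build access" \
  --basic-level-spec=build-access.yaml \
  --policy=POLICY_ID

# Attach the level to the existing service perimeter.
gcloud access-context-manager perimeters update PERIMETER_NAME \
  --add-access-levels=cloud-build-access \
  --policy=POLICY_ID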
Function deployment fails due to IPv6 addresses not permitted in VPC-SC
Cloud Functions can use IPv6 addresses for outbound requests to Cloud Storage. If you use VPC Service Controls and IPv6 addresses are not permitted in your service perimeter, this can cause failures with function deployment or execution. In order to use VPC Service Controls with Cloud Functions and IPv6 addresses, you must configure an access level to permit IPv6 addresses in your service perimeter.
The error message
In Audited Resource logs, an entry like the following:
"protoPayload": {
  "status": {
    "message": "PERMISSION_DENIED",
    "details": [{
      "@type": "type.googleapis.com/google.rpc.PreconditionFailure",
      "violations": [{
        "type": "VPC_SERVICE_CONTROLS",
        ...
  "requestMetadata": {
    "callerIp": "IPv6_ADDRESS",
    ...
  "serviceName": "storage.googleapis.com",
  "methodName": "google.storage.buckets.get",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata",
    "violationReason": "NO_MATCHING_ACCESS_LEVEL",
    ...
The solution
To specifically allow requests from Cloud Functions and not the entire Internet, allow the range 2600:1900::/28 to access your VPC-SC perimeter by configuring an access level for this range.
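A sketch along the same lines as the previous section, using an ipSubnetworks condition (names are illustrative):

# Define an access level that permits the documented Cloud Functions IPv6 range.
cat > gcf-ipv6.yaml <<'EOF'
- ipSubnetworks:
  - 2600:1900::/28
EOF

gcloud access-context-manager levels create gcf-ipv6 \
  --title="Cloud Functions IPv6" \
  --basic-level-spec=gcf-ipv6.yaml \
  --policy=POLICY_ID

gcloud access-context-manager perimeters update PERIMETER_NAME \
  --add-access-levels=gcf-ipv6 \
  --policy=POLICY_ID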
Function deployment fails due to incorrectly specified entry point
Cloud Functions deployment can fail if the entry point to your code, that is, the exported function name, is not specified correctly.
The error message
Cloud Console
Deployment failure: Function failed on loading user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: Please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs
The solution
Your source code must contain an entry point function that has been correctly specified in your deployment, either via Cloud Console or Cloud SDK.
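For example, with the Cloud SDK the entry point is passed explicitly; the function and handler names here are placeholders:

# --entry-point must match the name exported/defined in your source code.
gcloud functions deploy my-function \
  --entry-point=my_handler \
  --runtime=python310 \
  --trigger-http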
Function deployment fails when using Resource Location Constraint organization policy
If your organization uses a Resource Location Constraint policy, you may see this error in your logs. It indicates that the deployment pipeline failed to create a multi-regional storage bucket.
The error message
In Cloud Build logs:
Token exchange failed for project '<PROJECT_ID>'. Org Policy Violated: '<REGION>' violates constraint 'constraints/gcp.resourceLocations'
In Cloud Storage logs:
<REGION>.artifacts.<PROJECT_ID>.appspot.com storage bucket could not be created.
The solution
If you are using constraints/gcp.resourceLocations in your organization policy constraints, you should specify the appropriate multi-region location. For example, if you are deploying in any of the us regions, you should use us-locations.
However, if you require more fine-grained control and want to restrict function deployment to a single region (not multiple regions), create the multi-region bucket first:
- Allow the whole multi-region
- Deploy a test function
- After the deployment has succeeded, change the organizational policy back to allow only the specific region.
The multi-region storage bucket stays available for that region, so that subsequent deployments can succeed. If you later decide to allowlist a region outside of the one where the multi-region storage bucket was created, you must repeat the process.
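A hedged sketch of the workaround with the Cloud SDK, assuming the us multi-region and a project-level policy (adjust to your organization's setup):

# Temporarily allow the whole us multi-region so the bucket can be created.
gcloud resource-manager org-policies allow constraints/gcp.resourceLocations \
  in:us-locations --project=PROJECT_ID

# Deploy a test function, then tighten the policy back to the single region afterwards.
gcloud functions deploy test-function --runtime=python310 --trigger-http --region=us-central1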
Function deployment fails while executing function's global scope
This error indicates that there was a problem with your code. The deployment pipeline finished deploying the function, but failed at the last step: sending a health check to the function. This health check is meant to execute a function's global scope, which could be throwing an exception, crashing, or timing out. The global scope is where you usually load libraries and initialize clients.
The error message
In Cloud Logging logs:
"Part failed on loading user lawmaking. This is probable due to a bug in the user code."
The solution
For a more detailed mistake bulletin, expect into your part's build logs, equally well as your part'due south runtime logs. If information technology is unclear why your function failed to execute its global scope, consider temporarily moving the code into the request invocation, using lazy initialization of the global variables. This allows you to add together extra log statements effectually your client libraries, which could be timing out on their instantiation (peculiarly if they are calling other services), or crashing/throwing exceptions altogether.
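To pull the runtime logs from the command line, something like the following works for 1st-gen functions (the function name is a placeholder):

gcloud functions logs read FUNCTION_NAME --limit=50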
Build
When you deploy your function's source code to Cloud Functions, that source is stored in a Cloud Storage bucket. Cloud Build then automatically builds your code into a container image and pushes that image to Container Registry. Cloud Functions accesses this image when it needs to run the container to execute your function.
Build failed due to missing Container Registry Images
Cloud Functions uses Container Registry to manage images of the functions. Container Registry uses Cloud Storage to store the layers of the images in buckets named STORAGE-REGION.artifacts.PROJECT-ID.appspot.com. Using Object Lifecycle Management on these buckets breaks the deployment of the functions as the deployments depend on these images being present.
The error message
Cloud Console
Build failed: Build error details not available. Please check the logs at <CLOUD_CONSOLE_LINK>. CLOUD_CONSOLE_LINK contains an error like the following: failed to get OS from config file for image 'us.gcr.io/<PROJECT_ID>/gcf/us-central1/<UUID>/worker:latest'
Cloud SDK
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Build failed: Build error details not available. Please check the logs at <CLOUD_CONSOLE_LINK>. CLOUD_CONSOLE_LINK contains an error like the following: failed to get OS from config file for image 'us.gcr.io/<PROJECT_ID>/gcf/us-central1/<UUID>/worker:latest'
The solution
- Disable Lifecycle Management on the buckets required by Container Registry (see the sketch after this list).
- Delete all the images of affected functions. You can access the build logs to find the image paths. A reference script is available to bulk delete the images. Note that this does not affect the functions that are currently deployed.
- Redeploy the functions.
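For the first step, a possible sketch that clears all lifecycle rules on the bucket (the bucket name follows the pattern above; adjust region and project):

# An empty rule list disables Object Lifecycle Management on the bucket.
echo '{"rule": []}' > lifecycle.json
gsutil lifecycle set lifecycle.json gs://us.artifacts.PROJECT_ID.appspot.com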
Serving
The serving phase can also be a source of errors.
Serving permission error due to the function being private
Cloud Functions allows you to declare functions private, that is, to restrict access to end users and service accounts with the appropriate permission. By default deployed functions are set as private. This error message indicates that the caller does not have permission to invoke the function.
The error message
HTTP Error Response code: 403 Forbidden
HTTP Error Response body: Error: Forbidden Your client does not have permission to get URL /<FUNCTION_NAME> from this server.
The solution
You can:
- Allow public (unauthenticated) access to all users for the specific function, or
- Assign the user the Cloud Functions Invoker Cloud IAM role for all functions.
Both options are sketched below.
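# Option 1: allow unauthenticated invocations of one specific function
# (placeholders throughout).
gcloud functions add-iam-policy-binding FUNCTION_NAME \
  --region=REGION \
  --member=allUsers \
  --role=roles/cloudfunctions.invoker

# Option 2: grant a specific user the Invoker role across the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=user:USER_EMAIL \
  --role=roles/cloudfunctions.invoker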
Serving permission error due to "allow internal traffic only" configuration
Ingress settings restrict whether an HTTP function can be invoked by resources outside of your Google Cloud project or VPC Service Controls service perimeter. When the "allow internal traffic only" setting for ingress networking is configured, this error message indicates that only requests from VPC networks in the same project or VPC Service Controls perimeter are allowed.
The error message
HTTP Error Response code: 403 Forbidden
HTTP Error Response body: Error 403 (Forbidden) 403. That's an error. Access is forbidden. That's all we know.
The solution
You can:
- Ensure that the request is coming from your Google Cloud project or VPC Service Controls service perimeter, or
- Change the ingress settings to allow all traffic for the function (see the sketch below).
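For the second option, a minimal sketch; redeploying with only the changed flag assumes the function already exists and keeps its other settings:

# Open ingress to all traffic for this function.
gcloud functions deploy FUNCTION_NAME --ingress-settings=all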
Function invocation lacks valid authentication credentials
Invoking a Cloud Functions function that has been set up with restricted access requires an ID token. Access tokens or refresh tokens do not work.
The error message
HTTP Error Response code: 401 Unauthorized
HTTP Error Response body: Your client does not have permission to the requested URL
The solution
Make sure that your requests include an Authorization: Bearer ID_TOKEN header, and that the token is an ID token, not an access or refresh token. If you are generating this token manually with a service account's private key, you must exchange the self-signed JWT token for a Google-signed Identity token, following this guide.
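For a quick test as your own gcloud user, something like the following works (the URL shape is the 1st-gen default; substitute your region, project, and function name):

curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
  https://REGION-PROJECT_ID.cloudfunctions.net/FUNCTION_NAME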
Attempt to invoke function using curl redirects to Google login page
If you attempt to invoke a function that does not exist, Cloud Functions responds with an HTTP/2 302 redirect which takes you to the Google account login page. This is incorrect. It should respond with an HTTP/2 404 error response code. The problem is being addressed.
The solution
Make sure you specify the name of your function correctly. You can always check using gcloud functions call, which returns the correct 404 error for a missing function.
Application crashes and function execution fails
This error indicates that the process running your function has died. This is usually due to the runtime crashing because of issues in the function code. This may also happen when a deadlock or another condition in your function's code causes the runtime to become unresponsive to incoming requests.
The error message
In Cloud Logging logs: "Infrastructure cannot communicate with function. There was likely a crash or deadlock in the user-provided code."
The solution
Different runtimes can crash under different scenarios. To find the root cause, output detailed debug level logs, check your application logic, and test for edge cases.
The Cloud Functions Python37 runtime currently has a known limitation on the rate at which it can handle logging. If log statements from a Python37 runtime instance are written at a sufficiently high rate, it can produce this error. Python runtime versions >= 3.8 do not have this limitation. We encourage users to migrate to a higher version of the Python runtime to avoid this issue.
If you are still uncertain about the cause of the error, check out our support page.
Function stops mid-execution, or continues running after your code finishes
Some Cloud Functions runtimes allow users to run asynchronous tasks. If your function creates such tasks, it must also explicitly wait for these tasks to complete. Failure to do so may cause your function to stop executing at the wrong time.
The error behavior
Your function exhibits one of the following behaviors:
- Your function terminates while asynchronous tasks are still running, but before the specified timeout period has elapsed.
- Your function does not stop running when these tasks finish, and continues to run until the timeout period has elapsed.
The solution
If your function terminates early, you should make sure all your function's asynchronous tasks have been completed before doing any of the following:
- returning a value
- resolving or rejecting a returned Promise object (Node.js functions only)
- throwing uncaught exceptions and/or errors
- sending an HTTP response
- calling a callback function
If your function fails to stop once all asynchronous tasks have completed, you should verify that your function is correctly signaling Cloud Functions once it has completed. In particular, make sure that you perform one of the operations listed above as soon as your function has finished its asynchronous tasks.
JavaScript heap out of memory
For Node.js 12+ functions with memory limits greater than 2GiB, users need to configure NODE_OPTIONS to have max_old_space_size so that the JavaScript heap limit is equivalent to the function's memory limit.
The error message
Cloud Console
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
The solution
Deploy your Node.js 12+ function with NODE_OPTIONS configured to have max_old_space_size set to your function's memory limit. For example:
gcloud functions deploy envVarMemory \
  --runtime nodejs16 \
  --set-env-vars NODE_OPTIONS="--max_old_space_size=8192" \
  --memory 8Gi \
  --trigger-http
Function terminated
You may see one of the following error messages when the process running your code exits, either due to a runtime error or a deliberate exit. There is also a small chance that a rare infrastructure error occurred.
The error messages
Function invocation was interrupted. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
Request rejected. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
Function cannot be initialized. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting information can be found in Logging.
The solution
- For a background (Pub/Sub triggered) function, when an executionID is associated with the request that ended up in error, try enabling retry on failure (a deploy sketch follows this list). This allows retrying the function execution when a retriable exception is raised. For more information on how to use this option safely, including mitigations for avoiding infinite retry loops and managing retriable/fatal errors differently, see Best Practices.
- Background activity (anything that happens after your function has terminated) can cause issues, so check your code. Cloud Functions does not guarantee any actions other than those that run during the execution period of the function, so even if an activity runs in the background, it might be terminated by the cleanup process.
- In cases where there is a sudden traffic spike, try spreading the workload over a little more time. Also test your functions locally using the Functions Framework before you deploy to Cloud Functions to ensure that the error is not due to missing or incorrectly declared dependencies.
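The retry option mentioned in the first item can be enabled at deploy time; a sketch for a Pub/Sub-triggered function (names are placeholders):

# Redeploy with retry on failure enabled for an event-driven function.
gcloud functions deploy FUNCTION_NAME \
  --trigger-topic=TOPIC_NAME \
  --runtime=python310 \
  --retry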
Scalability
Scaling issues related to Cloud Functions infrastructure can arise in several circumstances.
The following conditions can be associated with scaling failures.
- A huge sudden increase in traffic.
- A long cold start time.
- A long request processing time.
- A high function error rate.
- Reaching the maximum instance limit, so the system cannot scale any further.
- Transient factors attributed to the Cloud Functions service.
In each case Cloud Functions might not scale up fast enough to manage the traffic.
The error message
- The request was aborted because there was no available instance. This surfaces in two forms:
  - severity=WARNING (Response code: 429) Cloud Functions cannot scale due to the max-instances limit you set during configuration.
  - severity=ERROR (Response code: 500) Cloud Functions intrinsically cannot manage the rate of traffic.
The solution
- For HTTP trigger-based functions, have the client implement exponential backoff and retries for requests that must not be dropped.
- For background/event-driven functions, Cloud Functions supports at-least-once delivery. Even without explicitly enabling retry, the event is automatically re-delivered and the function execution will be retried. See Retrying Event-Driven Functions for more information.
- When the root cause of the issue is a period of heightened transient errors attributed solely to Cloud Functions, or if you need assistance with your issue, contact support.
Logging
Setting up logging to help you track down problems can cause problems of its own.
Log entries have no, or incorrect, log severity levels
Cloud Functions includes simple runtime logging by default. Logs written to stdout or stderr appear automatically in the Cloud Console. But these log entries, by default, contain only simple string messages.
The error message
No or incorrect severity levels in logs.
The solution
To include log severities, you must send a structured log entry instead.
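In practice this means writing a single line of JSON to stdout; Cloud Logging picks up the severity field and treats the rest as the payload. An illustrative entry (the field values are made up):

{"severity": "ERROR", "message": "An example structured log entry", "component": "my-function"}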
Handle or log exceptions differently in the event of a crash
You may want to customize how you manage and log crash information.
The solution
Wrap your function in a try/catch block to customize how you handle exceptions and log stack traces.
Example
import logging
import traceback

def try_catch_log(wrapped_func):
  def wrapper(*args, **kwargs):
    try:
      response = wrapped_func(*args, **kwargs)
    except Exception:
      # Replace new lines with spaces so as to prevent several entries which
      # would trigger several errors.
      error_message = traceback.format_exc().replace('\n', '  ')
      logging.error(error_message)
      return 'Error'
    return response
  return wrapper


# Example hello world function
@try_catch_log
def python_hello_world(request):
  request_args = request.args

  if request_args and 'name' in request_args:
    # Intentional type error, to demonstrate the wrapper catching a crash.
    1 + 's'
  return 'Hello World!'
Logs too large in Node.js 10+, Python 3.8, Go 1.13, and Java 11
The max size for a regular log entry in these runtimes is 105 KiB.
The solution
Make sure you send log entries smaller than this limit.
Cloud Functions logs are not appearing in Log Explorer
Some Cloud Logging client libraries use an asynchronous process to write log entries. If a function crashes, or otherwise terminates, it is possible that some log entries have not been written yet and may appear later. It is also possible that some logs will be lost and cannot be seen in Log Explorer.
The solution
Use the client library interface to flush buffered log entries before exiting the function, or use the library to write log entries synchronously. You can also synchronously write logs directly to stdout or stderr.
Cloud Functions logs are not appearing via Log Router Sink
Log entries are routed to their various destinations using Log Router Sinks.
Included in the settings are Exclusion filters, which define entries that can simply be discarded.
The solution
Make sure no exclusion filter is set for resource.type="cloud_functions"
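One way to check from the command line is to inspect the sink definition; the _Default sink is where function logs normally land:

gcloud logging sinks describe _Default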
Database connections
There are a number of issues that can arise when connecting to a database, many associated with exceeding connection limits or timing out. If you see a Cloud SQL warning in your logs, for example, "context deadline exceeded", you might need to adjust your connection configuration. See the Cloud SQL docs for additional details.
Source: https://cloud.google.com/functions/docs/troubleshooting