I met this problem today and fixed it with the steps below.
1. Check the log file /var/log/mysqld.log
tail -f /var/log/mysqld.log
2017-03-14T07:06:53.374603Z 0 [ERROR] /usr/sbin/mysqld: Can't create/write to file '/var/run/mysqld/mysqld.pid' (Errcode: 2 - No such file or directory)
2017-03-14T07:06:53.374614Z 0 [ERROR] Can't start server: can't create PID file: No such file or directory
The log says that there isn't a file or directory /var/run/mysqld/mysqld.pid
2. Create the directory /var/run/mysqld
mkdir -p /var/run/mysqld/
3. Start mysqld again: service mysqld start
It still fails, so check the log again: /var/log/mysqld.log
2017-03-14T07:14:22.967667Z 0 [ERROR] /usr/sbin/mysqld: Can't create/write to file '/var/run/mysqld/mysqld.pid' (Errcode: 13 - Permission denied)
2017-03-14T07:14:22.967678Z 0 [ERROR] Can't start server: can't create PID file: Permission denied
It says permission denied.
4. Grant ownership to the mysql user
chown mysql:mysql /var/run/mysqld/
5. Restart mysqld
# service mysqld restart
Restarting mysqld (via systemctl): [ OK ]
With google-drive-ftp-adapter I have been able to access the My Drive area of Google Drive with the FileZilla FTP client. However, I have not been able to access the Shared with me area.
You can configure which Google account credentials it uses by changing the account property in the configuration.properties file from default to the desired Google account name. See the instructions at http://www.andresoviedo.org/google-drive-ftp-adapter/
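For example, in configuration.properties (myaccount is a placeholder for your own Google account name):
# was: account=default
account=myaccount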
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
In case you need to perform privileged tasks like changing permissions of folders you can perform those tasks as a root user and then create a non-privileged user and switch to it:
FROM <some-base-image:tag>
# Switch to root user
USER root # <--- Usually you won't need it - depends on the base image
# Run privileged command
RUN apt install <packages>
RUN apt <privileged command>
# Set user and group
ARG user=appuser
ARG group=appuser
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group}
RUN useradd -u ${uid} -g ${group} -s /bin/sh -m ${user} # <--- the '-m' creates a user home directory
# Switch to user
USER ${uid}:${gid}
# Run non-privileged command
RUN apt <non-privileged command>
Create a script similar to this:
#!/bin/sh -e
location=~/`date +%Y%m%d_%H%M%S`.db
mysqldump -u root --password=<your password> database_name > $location
gzip $location
Then you can edit the crontab
of the user that the script is going to run as:
$> crontab -e
And append the entry
01 * * * * ~/script_path.sh
This will make it run on the first minute of every hour every day.
Then you just have to add in your rolls and other functionality and you are good to go.
This is not explicitly mentioned, but based on the following docs, I think it is implied that an app needs to declare and implement a BackupAgent in order for data backup to work, even in the case when allowBackup is set to true (which is the default value).
http://developer.android.com/reference/android/R.attr.html#allowBackup http://developer.android.com/reference/android/app/backup/BackupManager.html http://developer.android.com/guide/topics/data/backup.html
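As a rough sketch of what that might look like (the class and preferences-file names here are hypothetical), you would point android:backupAgent in the manifest's <application> element at an agent class such as:
import android.app.backup.BackupAgentHelper;
import android.app.backup.SharedPreferencesBackupHelper;

// Referenced from the manifest via android:backupAgent=".MyBackupAgent"
public class MyBackupAgent extends BackupAgentHelper {
    @Override
    public void onCreate() {
        // "user_prefs" is an example SharedPreferences file name
        addHelper("prefs_key", new SharedPreferencesBackupHelper(this, "user_prefs"));
    }
}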
For me, it was a permission issue. I installed SQL Server using a local user account, before joining my company's domain. Later on, I tried to restore a database using my domain account, which didn't have the permissions needed to restore SQL Server databases. You need to fix the permissions for your domain account and give it the sysadmin role on the SQL Server instance you have.
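For example, an existing admin could grant that with something like this (the account name is a placeholder; the second form is for SQL Server 2008 R2 and earlier):
ALTER SERVER ROLE sysadmin ADD MEMBER [DOMAIN\YourUser];
-- or, on older versions:
EXEC sp_addsrvrolemember 'DOMAIN\YourUser', 'sysadmin';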
1: Add the following line into the AssemblyInfo class
[assembly: log4net.Config.XmlConfigurator(Watch = true)]
2: Make sure you don't use the .NET Framework 4 Client Profile as the Target Framework
(I think this is OK on your side, because otherwise it wouldn't even compile)
3: Make sure you log very early in your program. Otherwise, in some scenarios, it will not be initialized properly (read more on log4net FAQ).
So log something during application startup in the Global.asax
public class Global : System.Web.HttpApplication
{
private static readonly log4net.ILog Log = log4net.LogManager.GetLogger(typeof(Global));
protected void Application_Start(object sender, EventArgs e)
{
Log.Info("Startup application.");
}
}
4: Make sure you have permission to create files and folders on the given path (if the folder itself also doesn't exist)
5: The rest of your given information looks ok
The AWS CLI now supports the --query parameter, which takes a JMESPath expression.
This means you can sum the size values given by list-objects using sum(Contents[].Size) and count them with length(Contents[]).
This can be run using the official AWS CLI as below, and was introduced in Feb 2014:
aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"
You might consider Gitblit, an open-source, integrated, pure Java Git server, viewer, and repository manager for small workgroups.
I use the following script for repairing malformed sqlite files:
#!/bin/bash
cat <( sqlite3 "$1" .dump | grep "^ROLLBACK" -v ) <( echo "COMMIT;" ) | sqlite3 "fix_$1"
Most of the time when a sqlite database is malformed it is still possible to make a dump. This dump is basically a lot of SQL statements that rebuild the database.
Some rows might be missing from the dump (probably because they are corrupted). If this is the case, the INSERT statements for the missing rows will be replaced with some comments, and the script will end with a ROLLBACK TRANSACTION.
So what we do here is make the dump (malformed rows are excluded) and replace the ROLLBACK with a COMMIT, so that the entire dump script is committed instead of rolled back.
This method saved my life a couple of 100 times already \o/
I solved the same problem with the following 3 steps:
In PowerShell, I like to use:
robocopy src dest | Out-Null
It avoids having to remember all the command line switches.
There's an ELSE in the DOS batch language? Back in the days when I did more of this kinda thing, there wasn't.
If my theory is correct and your ELSE is being ignored, you may be better off doing
IF NOT EXIST file GOTO label
...which will also save you a line of code (the one right after your IF).
Second, I vaguely remember some kind of bug with testing for the existence of directories. Life would be easier if you could test for the existence of a file in that directory. If there's no file you can be sure of, something to try (this used to work up to Win95, IIRC) would be to append the device file name NUL
to your directory name, e.g.
IF NOT EXIST C:\dir\NUL GOTO ...
Sorry, but it might even be more complicated, involving pbkdf2, or even a variation of it. Listen to the WWDC 2010 session #209, which mainly talks about the security measures in iOS 4, but also mentions briefly the separate encryption of backups and how they're related.
You can be pretty sure that without knowing the password, there's no way you can decrypt it, even by brute force.
Let's just assume you want to try to enable people who KNOW the password to get to the data of their backups.
I fear there's no way around looking at the actual code in iTunes in order to figure out which algos are employed.
Back in the Newton days, I had to decrypt data from a program and was able to call its decryption function directly (knowing the password, of course) without the need to even understand its algorithm. It's not that easy anymore, unfortunately.
I'm sure there are skilled people around who could reverse engineer that iTunes code - you just have to get them interested.
In theory, Apple's algos should be designed in a way that makes the data still safe (i.e. practically unbreakable by brute force methods) to any attacker knowing the exact encryption method. And in WWDC session 209 they went pretty deep into details about what they do to accomplish this. Maybe you can actually get answers directly from Apple's security team if you tell them your good intentions. After all, even they should know that security by obfuscation is not really efficient. Try their security mailing list. Even if they do not respond, maybe someone else silently on the list will respond with some help.
Good luck!
Use something like Liquibase. This lets you keep your Liquibase files under revision control. You can tag changes for production only, and have LB keep your DB up to date for either production or development (or whatever scheme you want).
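For instance, a changeSet can be restricted to production via the context attribute (the id, author, and file name here are just examples):
<changeSet id="20140101-1" author="dev" context="production">
    <sqlFile path="prod-only-data.sql"/>
</changeSet>
Running updates with --contexts=production then applies it; runs without that context skip it.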
Welcome to the fickle world of SQL Server log management.
SOMETHING is wrong, though I don't think anyone will be able to tell you more than that without some additional information. For example, has this database ever been used for Transactional SQL Server replication? This can cause issues like this if a transaction hasn't been replicated to a subscriber.
In the interim, this should at least allow you to kill the log file:
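A common way to do that, assuming a database named YourDb (on SQL Server 2005 and earlier you could truncate the log directly; on later versions switching to SIMPLE recovery has the same effect):
BACKUP LOG YourDb WITH TRUNCATE_ONLY;
-- or, on SQL Server 2008 and later:
ALTER DATABASE YourDb SET RECOVERY SIMPLE;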
You should now be able to shrink the files (if performing the backup didn't do that for you).
Good luck!
Try the following commands first (re-run if needed):
$ git fsck --full
$ git gc
$ git gc --prune=today
$ git fetch --all
$ git pull --rebase
If you still have problems after that, try to:
remove all the corrupt objects, e.g.
fatal: loose object 91c5...51e5 (stored in .git/objects/06/91c5...51e5) is corrupt
$ rm -v .git/objects/06/91c5...51e5
remove all the empty objects, e.g.
error: object file .git/objects/06/91c5...51e5 is empty
$ find .git/objects/ -size 0 -exec rm -vf "{}" \;
check a "broken link" message by:
git ls-tree 2d9263c6d23595e7cb2a21e5ebbb53655278dff8
This will tell you which file the corrupt blob came from!
to recover the file, you might be really lucky, and it may be the version that you already have checked out in your working tree:
git hash-object -w my-magic-file
again, and if it outputs the missing SHA1 (4b945..) you're now all done!
assuming that it was some older version that was broken, the easiest way to do it is to do:
git log --raw --all --full-history -- subdirectory/my-magic-file
and that will show you the whole log for that file (please realize that the tree you had may not be the top-level tree, so you need to figure out which subdirectory it was in on your own), then you can now recreate the missing object with hash-object again.
to get a list of all refs with missing commits, trees or blobs:
$ git for-each-ref --format='%(refname)' | while read ref; do git rev-list --objects $ref >/dev/null || echo "in $ref"; done
It may not be possible to remove some of those refs using the regular branch -d or tag -d commands, since they will die if git notices the corruption. So use the plumbing command git update-ref -d $ref instead. Note that in case of local branches, this command may leave stale branch configuration behind in .git/config. It can be deleted manually (look for the [branch "$ref"] section).
After all refs are clean, there may still be broken commits in the reflog. You can clear all reflogs using git reflog expire --expire=now --all. If you do not want to lose all of your reflogs, you can search the individual refs for broken reflogs:
$ (echo HEAD; git for-each-ref --format='%(refname)') | while read ref; do git rev-list -g --objects $ref >/dev/null || echo "in $ref"; done
(Note the added -g option to git rev-list.) Then, use git reflog expire --expire=now $ref on each of those. When all broken refs and reflogs are gone, run git fsck --full in order to check that the repository is clean. Dangling objects are Ok.
Below you can find advanced usage of commands which can potentially cause loss of data in your git repository if not used wisely, so make a backup before you accidentally do further damage to your git. Try at your own risk, if you know what you're doing.
To pull the current branch on top of the upstream branch after fetching:
$ git pull --rebase
You may also try to check out a new branch and delete the old one:
$ git checkout -b new_master origin/master
To find the corrupted object in git for removal, try the following command:
while [ true ]; do f=`git fsck --full 2>&1|awk '{print $3}'|sed -r 's/(^..)(.*)/objects\/\1\/\2/'`; if [ ! -f "$f" ]; then break; fi; echo delete $f; rm -f "$f"; done
For OSX, use sed -E instead of sed -r.
Another idea is to unpack all objects from the pack files to regenerate all objects inside .git/objects, so try to run the following commands within your repository:
$ cp -fr .git/objects/pack .git/objects/pack.bak
$ for i in .git/objects/pack.bak/*.pack; do git unpack-objects -r < $i; done
$ rm -frv .git/objects/pack.bak
If the above doesn't help, you may try to rsync or copy the git objects from another repo, e.g.
$ rsync -varu git_server:/path/to/git/.git local_git_repo/
$ rsync -varu /local/path/to/other-working/git/.git local_git_repo/
$ cp -frv ../other_repo/.git/objects .git/objects
To fix a broken branch that fails like this when you try to check it out:
$ git checkout -f master
fatal: unable to read tree 5ace24d474a9535ddd5e6a6c6a1ef480aecf2625
Try to remove it and checkout from upstream again:
$ git branch -D master
$ git checkout -b master github/master
In case git gets you into a detached HEAD state, check out master
and merge the detached branch into it.
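For example (the commit sha is a placeholder):
$ git checkout master
$ git merge <detached-head-sha>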
Another idea is to rebase the existing master recursively:
$ git reset HEAD --hard
$ git rebase -s recursive -X theirs origin/master
See also:
You can create a backup device in Server Objects, let us say
BDTEST
and then create a batch file containing the following command:
sqlcmd -S 192.168.1.25 -E -Q "BACKUP DATABASE dbtest TO BDTEST"
Save it, let us say with the name
backup.bat
Then you can call
backup.bat
in Task Scheduler according to your convenience.
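If you prefer T-SQL to the GUI for creating the device itself, something like this should do it (the physical path is an example):
EXEC sp_addumpdevice 'disk', 'BDTEST', 'C:\Backups\BDTEST.bak';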
There isn't a "pure python" way to do this because some other process would have to launch python in order to run your solution. Every platform will have one or twenty different ways to launch processes and monitor their progress. On unix platforms, cron is the old standard. On Mac OS X there is also launchd, which combines cron-like launching with watchdog functionality that can keep your process alive if that's what you want. Once python is running, then you can use the sched module to schedule tasks.
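As a minimal sketch of the sched approach (the 60-second interval and the task body are arbitrary examples):
import sched
import time

s = sched.scheduler(time.time, time.sleep)

def task():
    print("running scheduled task")
    s.enter(60, 1, task)  # re-schedule itself to run again in 60 seconds

s.enter(60, 1, task)
s.run()  # blocks, running scheduled tasks as they come due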
Schedule the following to backup all Databases:
Use Master
Declare @ToExecute VarChar(8000)
Select @ToExecute = Coalesce(@ToExecute,'') + 'Backup Database ' + [Name] + ' To Disk = ''D:\Backups\Databases\' + [Name] + '.bak'' With Format;' + char(13)
From
Master..Sysdatabases
Where
[Name] Not In ('tempdb')
and databasepropertyex ([Name],'Status') = 'online'
Execute(@ToExecute)
There are also more details on my blog: how to Automate SQL Server Express Backups.
There has been a lot of discussion about the database model itself, but we also keep the required data in .SQL files.
For example, in order to be useful your application might need this in the install:
INSERT INTO Currency (CurrencyCode, CurrencyName)
VALUES ('AUD', 'Australian Dollars');
INSERT INTO Currency (CurrencyCode, CurrencyName)
VALUES ('USD', 'US Dollars');
We would have a file called currency.sql
under subversion. As a manual step in the build process, we compare the previous currency.sql to the latest one and write an upgrade script.
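The upgrade script then contains only the delta; for example, if a new currency had been added to currency.sql (the values here are illustrative):
INSERT INTO Currency (CurrencyCode, CurrencyName)
VALUES ('EUR', 'Euro');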
You can list all unreachable commits by writing this command in terminal -
git fsck --unreachable
Check unreachable commit hash -
git show hash
Finally apply if you find the stashed item -
git stash apply hash
Here are the three web pages on which we found the answer. The most difficult part was setting up static ports for SQLEXPRESS.
Provisioning a SQL Server Virtual Machine on Windows Azure. These initial instructions provided 25% of the answer.
How to Troubleshoot Connecting to the SQL Server Database Engine. Reading this carefully provided another 50% of the answer.
How to configure SQL server to listen on different ports on different IP addresses?. This enabled setting up static ports for named instances (eg SQLEXPRESS.) It took us the final 25% of the way to the answer.
If you want exactly 40 spaces before the string then you should just do:
printf("                                        %s\n", myStr);
If that is too dirty, you can do (but it will be slower than manually typing the 40 spaces):
printf("%40s%s", "", myStr );
If you want the string to be lined up at column 40 (that is, have up to 39 spaces preceding it such that the rightmost character is in column 40) then do this:
printf("%40s", myStr);
You can also put "up to" 40 spaces after the string by doing:
printf("%-40s", myStr);
Some great solutions have already been posted. When I encountered this problem, I wanted to go both ways: convert an enum into a description, and convert a string matching a description into an enum.
I have two variants, slow and fast. Both convert from enum to string and string to enum. My problem is that I have enums like this, where some elements need attributes and some don't. I don't want to put attributes on elements that don't need them. I have about a hundred of these total currently:
public enum POS
{
CC, // Coordinating conjunction
CD, // Cardinal Number
DT, // Determiner
EX, // Existential there
FW, // Foreign Word
IN, // Preposition or subordinating conjunction
JJ, // Adjective
[System.ComponentModel.Description("WP$")]
WPDollar, //$ Possessive wh-pronoun
WRB, // Wh-adverb
[System.ComponentModel.Description("#")]
Hash,
[System.ComponentModel.Description("$")]
Dollar,
[System.ComponentModel.Description("''")]
DoubleTick,
[System.ComponentModel.Description("(")]
LeftParenth,
[System.ComponentModel.Description(")")]
RightParenth,
[System.ComponentModel.Description(",")]
Comma,
[System.ComponentModel.Description(".")]
Period,
[System.ComponentModel.Description(":")]
Colon,
[System.ComponentModel.Description("``")]
DoubleBackTick,
};
The first method for dealing with this is slow, and is based on suggestions I saw here and around the net. It's slow because we are reflecting for every conversion:
using System;
using System.Collections.Generic;
namespace CustomExtensions
{
/// <summary>
/// uses extension methods to convert enums with hyphens in their names to underscore and other variants
/// </summary>
public static class EnumExtensions
{
/// <summary>
/// Gets the description string, if available. Otherwise returns the name of the enum field
/// LthWrapper.POS.Dollar.GetString() yields "$", an impossible control character for enums
/// </summary>
/// <param name="value"></param>
/// <returns></returns>
public static string GetStringSlow(this Enum value)
{
Type type = value.GetType();
string name = Enum.GetName(type, value);
if (name != null)
{
System.Reflection.FieldInfo field = type.GetField(name);
if (field != null)
{
System.ComponentModel.DescriptionAttribute attr =
Attribute.GetCustomAttribute(field,
typeof(System.ComponentModel.DescriptionAttribute)) as System.ComponentModel.DescriptionAttribute;
if (attr != null)
{
//return the description if we have it
name = attr.Description;
}
}
}
return name;
}
/// <summary>
/// Converts a string to an enum field using the string first; if that fails, tries to find a description
/// attribute that matches.
/// "$".ToEnum<LthWrapper.POS>() yields POS.Dollar
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="value"></param>
/// <returns></returns>
public static T ToEnumSlow<T>(this string value) //, T defaultValue)
{
T theEnum = default(T);
Type enumType = typeof(T);
//check and see if the value is a non attribute value
try
{
theEnum = (T)Enum.Parse(enumType, value);
}
catch (System.ArgumentException e)
{
bool found = false;
foreach (T enumValue in Enum.GetValues(enumType))
{
System.Reflection.FieldInfo field = enumType.GetField(enumValue.ToString());
System.ComponentModel.DescriptionAttribute attr =
Attribute.GetCustomAttribute(field,
typeof(System.ComponentModel.DescriptionAttribute)) as System.ComponentModel.DescriptionAttribute;
if (attr != null && attr.Description.Equals(value))
{
theEnum = enumValue;
found = true;
break;
}
}
if( !found )
throw new ArgumentException("Cannot convert " + value + " to " + enumType.ToString());
}
return theEnum;
}
}
}
The problem with this is that you're doing reflection every time. I haven't measured the performance hit from doing so, but it seems alarming. Worse we are computing these expensive conversions repeatedly, without caching them.
Instead we can use a static constructor to populate some dictionaries with this conversion information, then just look up this information when needed. Apparently static classes (required for extension methods) can have constructors and fields :)
using System;
using System.Collections.Generic;
namespace CustomExtensions
{
/// <summary>
/// uses extension methods to convert enums with hyphens in their names to underscore and other variants
/// I'm not sure this is a good idea. While it makes that section of the code much much nicer to maintain, it
/// also incurs a performance hit via reflection. To circumvent this, I've added a dictionary so all the lookup can be done once at
/// load time. It requires that all enums involved in this extension are in this assembly.
/// </summary>
public static class EnumExtensions
{
//To avoid collisions, every Enum type has its own hash table
private static readonly Dictionary<Type, Dictionary<object,string>> enumToStringDictionary = new Dictionary<Type,Dictionary<object,string>>();
private static readonly Dictionary<Type, Dictionary<string, object>> stringToEnumDictionary = new Dictionary<Type, Dictionary<string, object>>();
static EnumExtensions()
{
//let's collect the enums we care about
List<Type> enumTypeList = new List<Type>();
//probe this assembly for all enums
System.Reflection.Assembly assembly = System.Reflection.Assembly.GetExecutingAssembly();
Type[] exportedTypes = assembly.GetExportedTypes();
foreach (Type type in exportedTypes)
{
if (type.IsEnum)
enumTypeList.Add(type);
}
//for each enum in our list, populate the appropriate dictionaries
foreach (Type type in enumTypeList)
{
//add dictionaries for this type
EnumExtensions.enumToStringDictionary.Add(type, new Dictionary<object,string>() );
EnumExtensions.stringToEnumDictionary.Add(type, new Dictionary<string,object>() );
Array values = Enum.GetValues(type);
//it's ok to manipulate 'value' as object, since when we convert we're given the type to cast to
foreach (object value in values)
{
System.Reflection.FieldInfo fieldInfo = type.GetField(value.ToString());
//check for an attribute
System.ComponentModel.DescriptionAttribute attribute =
Attribute.GetCustomAttribute(fieldInfo,
typeof(System.ComponentModel.DescriptionAttribute)) as System.ComponentModel.DescriptionAttribute;
//populate our dictionaries
if (attribute != null)
{
EnumExtensions.enumToStringDictionary[type].Add(value, attribute.Description);
EnumExtensions.stringToEnumDictionary[type].Add(attribute.Description, value);
}
else
{
EnumExtensions.enumToStringDictionary[type].Add(value, value.ToString());
EnumExtensions.stringToEnumDictionary[type].Add(value.ToString(), value);
}
}
}
}
public static string GetString(this Enum value)
{
Type type = value.GetType();
string aString = EnumExtensions.enumToStringDictionary[type][value];
return aString;
}
public static T ToEnum<T>(this string value)
{
Type type = typeof(T);
T theEnum = (T)EnumExtensions.stringToEnumDictionary[type][value];
return theEnum;
}
}
}
Look how tight the conversion methods are now. The only flaw I can think of is that this requires all the converted enums to be in the current assembly. Also, I only bother with exported enums, but you could change that if you wish.
This is how to call the methods
string x = LthWrapper.POS.Dollar.GetString();
LthWrapper.POS y = "PRP$".ToEnum<LthWrapper.POS>();
If you want to create a file and write and append data to it many times, then use the code below; it will create the file if it does not exist, and will append data if it does exist.
SimpleDateFormat formatter = new SimpleDateFormat("yyyy_MM_dd");
Date now = new Date();
String fileName = formatter.format(now) + ".txt";//like 2016_01_12.txt
try
{
File root = new File(Environment.getExternalStorageDirectory()+File.separator+"Music_Folder", "Report Files");
//File root = new File(Environment.getExternalStorageDirectory(), "Notes");
if (!root.exists())
{
root.mkdirs();
}
File gpxfile = new File(root, fileName);
FileWriter writer = new FileWriter(gpxfile,true);
writer.append(sBody+"\n\n");
writer.flush();
writer.close();
Toast.makeText(this, "Data has been written to Report File", Toast.LENGTH_SHORT).show();
}
catch(IOException e)
{
e.printStackTrace();
}
I had the same problem :) Verify the "Source code" folder in the "Solution Explorer"; if it doesn't contain any source code file, then:
Right click on "Source code" > Add > Existing Item > Choose the file you want to build and run.
Good luck ;)
Make these changes in your div's style:
z-index: 100;
A higher value makes sure that this element is above all others.
position: fixed;
This makes sure that even if scrolling is done, the div lies on top and is always visible.
Yes VBA is the way to go.
But, if you don't need to have a cell with formula that auto-counts/updates the number of cells with a particular colour, an alternative is simply to use the 'Find and Replace' function and format the cell to have the appropriate colour fill.
Hitting 'Find All' will give you the total number of cells found at the bottom left of the dialogue box.
This becomes especially useful if your search range is massive. The VBA script will be very slow but the 'Find and Replace' function will still be very quick.
There are no buts about it! Whoever wants to grasp the subject must do the following:
- Peruse The Lost Art of Structure Packing, written by Eric S. Raymond
- Glance at Eric's code example
- Last but not least, don't forget the following rule about padding: a struct is aligned to the largest member type's alignment requirements.
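A quick illustration of that rule (a sketch; sizes assume a typical platform where int is 4 bytes and 4-byte aligned):
#include <stdio.h>

struct S {
    char c;  /* 1 byte, followed by 3 bytes of padding */
    int  i;  /* 4 bytes, must start on a 4-byte boundary */
};           /* sizeof(struct S) == 8: rounded up to the largest member's alignment */

int main(void) {
    printf("%zu\n", sizeof(struct S));  /* prints 8, not 5 */
    return 0;
}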
update test1 t1, test2 t2
set t2.deptid = t1.deptid
where t2.employeeid = t1.employeeid
Note that you cannot use the FROM keyword in a MySQL UPDATE statement.
How about:
SimpleDateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy");
System.out.println(dateFormat.format(dateFormat.parse("31/05/2011")));
> 31/05/2011
On Linux, you can read from /proc/sys/kernel/random/uuid:
package main
import "io/ioutil"
import "fmt"
func main() {
u, _ := ioutil.ReadFile("/proc/sys/kernel/random/uuid")
fmt.Println(string(u))
}
No external dependencies!
$ go run uuid.go
3ee995e3-0c96-4e30-ac1e-f7f04fd03e44
Yes, this sort of thing is possible without Flash:
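For example, a minimal sketch (the file name is a placeholder; for actual transparency you need a source encoded with an alpha channel, e.g. a suitable WebM):
<video autoplay loop muted>
  <source src="clip.webm" type="video/webm">
</video>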
However, only very modern browsers support HTML5 video, so take this into consideration when deploying HTML5, and provide a fallback (probably Flash, or just omit the transparency).
I know this is an old question but here is the solution I found to a very similar coding issue in my own code:
var response = ApiDB.Persons.Include(y => y.JobTitle).Include(b => b.Discipline).Include(b => b.Team).Include(b => b.Site).OrderBy(d => d.DisplayName).ToArray();
foreach (var person in response)
{
person.JobTitle = new JobTitle()
{
JobTitle_ID = person.JobTitle.JobTitle_ID,
JobTitleName = person.JobTitle.JobTitleName,
PatientInteraction = person.JobTitle.PatientInteraction,
Active = person.JobTitle.Active,
IsClinical = person.JobTitle.IsClinical
};
}
Since the person object contains everything from the person table and the job title object contains a list of persons with that job title, the database kept self referencing. I thought disabling proxy creation and lazy loading would fix this but unfortunately it didn't.
For those that aren't able to do that, try the solution above: explicitly creating a new object for each object that self-references, but leaving out the list or object that points back to the previous entity, will fix it, since disabling lazy loading did not appear to work for me.
create view sampleView as
select field1, field2, ...
from dbo.MyTableValueFunction
Note that even if your MyTableValueFunction doesn't accept any parameters, you still need to include parentheses after it, i.e.:
... from dbo.MyTableValueFunction()
Without the parentheses, you'll get an "Invalid object name" error.
In [5]:
import pandas as pd
test = {
383: 3.000000,
663: 1.000000,
726: 1.000000,
737: 9.000000,
833: 8.166667
}
s = pd.Series(test)
s = s[s != 1]
s
Out[5]:
383 3.000000
737 9.000000
833 8.166667
dtype: float64
Go to the Servers view in Eclipse, then right click on the server and click Open. The log files are stored in a folder relative to the path in the "Server path" field.
Since the path field is uneditable, you can also "Open Launch Configuration", click Arguments tab, copy the VM argument for catalina.base (within quotes). This is the full path of your WTP webapp directory. Copying the value to the clipboard can save you the laborious task of browsing the file system to the path.
Also note you should be seeing the output to the log file in your Console view as you run or debug.
width: 80vmin; height: 80vmin;
This makes the element 80% of the smaller of the viewport's height and width.
if (!empty($web)) {
?>
<span class="field-label">Website: </span><a href="http://<?php the_field('website'); ?>" target="_blank"><?php the_field('website'); ?></a>
<?php
} else { echo "Niente";}
From inside your Symfony project, you can get the value in PHP this way:
$symfony_version = \Symfony\Component\HttpKernel\Kernel::VERSION;
I have read several articles discouraging inheritance in projects (as opposed to libraries/frameworks), and encouraging programming against interfaces, not against implementations.
They also advocate OO by composition: if you need the functions in classes A and B, make C have members/fields of those types:
class C
{
private $a, $b;
public function __construct($x, $y)
{
$this->a = new A(42, $x);
$this->b = new B($y);
}
protected function DoSomething()
{
$this->a->Act();
$this->b->Do();
}
}
This has been answered a few times but here's one more just for fun :-)
public class Tmp {
public static void main(String[] args) {
System.out.println(swapChars("abcde", 0, 1));
}
private static String swapChars(String str, int lIdx, int rIdx) {
StringBuilder sb = new StringBuilder(str);
char l = sb.charAt(lIdx), r = sb.charAt(rIdx);
sb.setCharAt(lIdx, r);
sb.setCharAt(rIdx, l);
return sb.toString();
}
}
Open it in any text editor to edit it:
...
-XX:MaxPermSize=256m
-Xms40m
-Xmx512m
...
Replace -Xmx512m with -Xmx1024m
There are differing opinions on whether accessing state in action creators is a good idea:
- Redux creator Dan Abramov feels that it should be limited: reading state.something.items in an action creator "is definitely an anti-pattern and is discouraged because it obscures the change history: if there is a bug and items are incorrect, it is hard to trace where those incorrect values come from because they are already part of the action, rather than directly computed by a reducer in response to an action. So do this with care."
- Current Redux maintainer Mark Erikson says it's fine and even encouraged to use getState in thunks - that's why it exists. He discusses the pros and cons of accessing state in action creators in his blog post Idiomatic Redux: Thoughts on Thunks, Sagas, Abstraction, and Reusability.
If you find that you need this, both approaches you suggested are fine. The first approach does not require any middleware:
import store from '../store';
export const SOME_ACTION = 'SOME_ACTION';
export function someAction() {
return {
type: SOME_ACTION,
items: store.getState().otherReducer.items,
}
}
However you can see that it relies on store
being a singleton exported from some module. We don’t recommend that because it makes it much harder to add server rendering to your app because in most cases on the server you’ll want to have a separate store per request. So while technically this approach works, we don’t recommend exporting a store from a module.
This is why we recommend the second approach:
export const SOME_ACTION = 'SOME_ACTION';
export function someAction() {
return (dispatch, getState) => {
const {items} = getState().otherReducer;
dispatch(anotherAction(items));
}
}
It would require you to use Redux Thunk middleware but it works fine both on the client and on the server. You can read more about Redux Thunk and why it’s necessary in this case here.
Ideally, your actions should not be “fat” and should contain as little information as possible, but you should feel free to do what works best for you in your own application. The Redux FAQ has information on splitting logic between action creators and reducers and times when it may be useful to use getState
in an action creator.
This is because str is pointing to a string literal, i.e. a constant string, but you are trying to modify it by copying into it. Note: if it had been an error due to memory allocation, it would have given a segmentation fault at run time; this error comes from modifying a constant string. For more details about bus errors:
Bus errors are rare nowadays on x86 and occur when your processor cannot even attempt the memory access requested, typically because an instruction is used with an address that does not satisfy its alignment requirements.
Segmentation faults occur when accessing memory which does not belong to your process; they are very common and are typically the result of dereferencing a null, dangling, or otherwise invalid pointer.
To be more precise, it is not manipulating the pointer itself that will cause issues, it's accessing the memory it points to (dereferencing).
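A minimal illustration of the difference (a sketch):
int main(void) {
    char *str = "hello";   /* str points to a read-only string literal */
    str[0] = 'H';          /* undefined behavior: bus error or segfault */

    char buf[] = "hello";  /* buf is a modifiable array copy of the literal */
    buf[0] = 'H';          /* fine */
    return 0;
}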
Sass (Syntactically Awesome StyleSheets) has two syntaxes:
- SCSS (Sassy CSS)
- the original, indented syntax (Sass)
So they are both part of the Sass preprocessor, with two different possible syntaxes.
The most important differences between SCSS and original Sass:
SCSS:
- Syntax is similar to CSS (so much so that every regular valid CSS3 is also valid SCSS, but the relationship in the other direction obviously does not hold)
- Uses braces {}
- Uses semicolons ;
- The assignment sign is :
- To define a mixin it uses the @mixin directive
- To use a mixin it uses the @include directive
Original Sass:
- Syntax is based on indentation; no braces {} and no semicolons ;
- The assignment sign is = instead of :
- To define a mixin it uses the = sign
- To use a mixin it uses the + sign
Some prefer Sass, the original syntax - while others prefer SCSS. Either way, it is worth noting that Sass's indented syntax has not been and will never be deprecated.
Conversions with sass-convert:
# Convert Sass to SCSS
$ sass-convert style.sass style.scss
# Convert SCSS to Sass
$ sass-convert style.scss style.sass
Use TimeSpan.Parse to convert the string:
http://msdn.microsoft.com/en-us/library/system.timespan.parse(v=vs.110).aspx
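For example (the input string is illustrative):
TimeSpan ts = TimeSpan.Parse("01:30:00");  // 1 hour, 30 minutes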
The easiest way is to use to_datetime:
df['col'] = pd.to_datetime(df['col'])
It also offers a dayfirst
argument for European times (but beware this isn't strict).
Here it is in action:
In [11]: pd.to_datetime(pd.Series(['05/23/2005']))
Out[11]:
0 2005-05-23 00:00:00
dtype: datetime64[ns]
You can pass a specific format:
In [12]: pd.to_datetime(pd.Series(['05/23/2005']), format="%m/%d/%Y")
Out[12]:
0 2005-05-23
dtype: datetime64[ns]
Expected to find it in C:\Program Files\Java\jre6\lib\tools.jar
If you have installed the JDK, then a
..\Java\jdkx.x.x
folder must exist there; install it if needed, and give the full path, like
C:\Program Files\Java\jdk1.6.0\lib\tools.jar
Before increasing the max_connections variable, you have to check how many non-interactive connections you have by running the SHOW PROCESSLIST command.
If you have many sleeping connections, you have to decrease the value of the wait_timeout variable to close non-interactive connections after they have waited for some time.
SHOW SESSION VARIABLES LIKE 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 28800 |
+---------------+-------+
The value is in seconds; it means that non-interactive connections can stay up for as long as 8 hours.
SET SESSION wait_timeout=600;
Query OK, 0 rows affected (0.00 sec)
After 10 minutes, if the sleeping connection is still sleeping, MySQL or MariaDB will drop that connection.
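To apply this to new connections as well, and to survive a restart, you can also set it globally and in the server configuration (600 seconds is just an example; the config file location varies by system):
SET GLOBAL wait_timeout=600;
# in my.cnf / my.ini, under [mysqld]:
wait_timeout=600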
You are very close; currently you compare row[2] with the integer 0; make the comparison with the string "0" instead. When you read the data from a file, it is a string and not an integer, so that is why your integer check fails currently:
row[2] != "0":
Also, you can use the with
keyword to make the current code slightly more pythonic so that the lines in your code are reduced and you can omit the .close
statements:
import csv
with open('first.csv', 'rb') as inp, open('first_edit.csv', 'wb') as out:
writer = csv.writer(out)
for row in csv.reader(inp):
if row[2] != "0":
writer.writerow(row)
Note that input
is a Python builtin, so I've used another variable name instead.
Edit: The values in your csv file's rows are comma and space separated; in a normal csv, they would be simply comma separated and a check against "0" would work, so you can either use row[2].strip() != "0", or check against " 0".
The better solution would be to correct the csv format, but in case you want to persist with the current one, the following will work with your given csv file format:
$ cat test.py
import csv
with open('first.csv', 'rb') as inp, open('first_edit.csv', 'wb') as out:
writer = csv.writer(out)
for row in csv.reader(inp):
if row[2] != " 0":
writer.writerow(row)
$ cat first.csv
6.5, 5.4, 0, 320
6.5, 5.4, 1, 320
$ python test.py
$ cat first_edit.csv
6.5, 5.4, 1, 320
If you have installed both Wampmanager and also Bitnami's wampstack on your Windows box (like I had done), make sure Bitnami has not been set to automatically start its wampstackApache and wampstackMySQL services at startup.
To check/fix this, click: Start-->Run and then type services.msc and click Ok.
Select Services in the list on the left and sort the services on Name. Scroll to the "w's". If wampstackApache and/or wampstackMySQL services are already started, right-click and stop both. Then Restart All Services from the Wampmanager W icon in the windows desktop services tray. The W should go green.
If this was your problem you can change the default startup behavior to manual start wampstackApache and wampstackMySQL in their Properties tabs.
if you want to see it graphically you can use
gitk -- foo/A
Actually, there's a solution with a moment.js plugin and it's very easy.
Don't reinvent the wheel again.
Just plug in the Moment.js Date Range Plugin.
var starts = moment('2014-02-03 12:53:12');
var ends = moment();

var duration = moment.duration(ends.diff(starts));

// with ###moment precise date range plugin###
// it will tell you the difference in human terms

var diff = moment.preciseDiff(starts, ends, true);
// example: { "years": 2, "months": 7, "days": 0, "hours": 6, "minutes": 29, "seconds": 17, "firstDateWasLater": false }

// or as string:
var diffHuman = moment.preciseDiff(starts, ends);
// example: 2 years 7 months 6 hours 29 minutes 17 seconds

document.getElementById('output1').innerHTML = JSON.stringify(diff)
document.getElementById('output2').innerHTML = diffHuman

<html>
<head>
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.14.1/moment.min.js"></script>
<script src="https://raw.githubusercontent.com/codebox/moment-precise-range/master/moment-precise-range.js"></script>
</head>
<body>
<h2>Difference between "NOW and 2014-02-03 12:53:12"</h2>
<span id="output1"></span>
<br />
<span id="output2"></span>
</body>
</html>
This error occurs because VMware has a bug that uses the absolute path of the disk file in certain situations.
If you look at the top of that small *.vmdk file you'll likely see an incorrect absolute path to the original VMDK file that needs to be corrected.
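For illustration, the extent line in the descriptor might look something like this (the path and size here are made up):
# Extent description
RW 16777216 SPARSE "C:\old\path\mydisk.vmdk"
Changing the absolute path back to the correct location (or to a plain relative file name such as "mydisk.vmdk") fixes it.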
For completeness, in addition to the other answers, if the thread is interrupted before it blocks on Object.wait(..)
or Thread.sleep(..)
etc., this is equivalent to it being interrupted immediately upon blocking on that method, as the following example shows.
public class InterruptTest {
public static void main(String[] args) {
Thread.currentThread().interrupt();
printInterrupted(1);
Object o = new Object();
try {
synchronized (o) {
printInterrupted(2);
System.out.printf("A Time %d\n", System.currentTimeMillis());
o.wait(100);
System.out.printf("B Time %d\n", System.currentTimeMillis());
}
} catch (InterruptedException ie) {
System.out.printf("WAS interrupted\n");
}
System.out.printf("C Time %d\n", System.currentTimeMillis());
printInterrupted(3);
Thread.currentThread().interrupt();
printInterrupted(4);
try {
System.out.printf("D Time %d\n", System.currentTimeMillis());
Thread.sleep(100);
System.out.printf("E Time %d\n", System.currentTimeMillis());
} catch (InterruptedException ie) {
System.out.printf("WAS interrupted\n");
}
System.out.printf("F Time %d\n", System.currentTimeMillis());
printInterrupted(5);
try {
System.out.printf("G Time %d\n", System.currentTimeMillis());
Thread.sleep(100);
System.out.printf("H Time %d\n", System.currentTimeMillis());
} catch (InterruptedException ie) {
System.out.printf("WAS interrupted\n");
}
System.out.printf("I Time %d\n", System.currentTimeMillis());
}
static void printInterrupted(int n) {
System.out.printf("(%d) Am I interrupted? %s\n", n,
Thread.currentThread().isInterrupted() ? "Yes" : "No");
}
}
Output:
$ javac InterruptTest.java
$ java -classpath "." InterruptTest
(1) Am I interrupted? Yes
(2) Am I interrupted? Yes
A Time 1399207408543
WAS interrupted
C Time 1399207408543
(3) Am I interrupted? No
(4) Am I interrupted? Yes
D Time 1399207408544
WAS interrupted
F Time 1399207408544
(5) Am I interrupted? No
G Time 1399207408545
H Time 1399207408668
I Time 1399207408669
Implication: if you loop like the following, and the interrupt occurs at the exact moment when control has left Thread.sleep(..)
and is going around the loop, the exception is still going to occur. So it is perfectly safe to rely on the InterruptedException being reliably thrown after the thread has been interrupted:
while (true) {
try {
Thread.sleep(10);
} catch (InterruptedException ie) {
break;
}
}
Removing the double quotes solved my problem; now it's:
DEBUG_STORE_PASSWORD=androiddebug
DEBUG_KEY_ALIAS=androiddebug
DEBUG_KEY_PASSWORD=androiddebug
You can do something like this in the Click or CellDoubleClick event of your grid (if you used one):
if(dgEmp.CurrentRow.Index != -1)
{
employ.Id = (Int32)dgEmp.CurrentRow.Cells["Id"].Value;
//Some other stuff here
}
Then do something like this in your Delete Button:
using(Context context = new Context())
{
var entry = context.Entry(employ);
if(entry.State == EntityState.Detached)
{
//Attached it since the record is already being tracked
context.Employee.Attach(employ);
}
//Use Remove method to remove it virtually from the memory
context.Employee.Remove(employ);
//Finally, execute SaveChanges method to finalized the delete command
//to the actual table
context.SaveChanges();
//Some stuff here
}
Alternatively, you can use a LINQ Query instead of using LINQ To Entities Query:
var query = (from emp in db.Employee
where emp.Id == employ.Id
select emp).Single();
employ.Id is used as filtering parameter which was already passed from the CellDoubleClick Event of your DataGridView.
The following groovy script would be useful, if your job does not use "Source Code Management" directly (likewise "Git Parameter Plugin"), but still have access to a local (cloned) git repository:
import jenkins.model.Jenkins
def envVars = Jenkins.instance.getNodeProperties()[0].getEnvVars()
def GIT_PROJECT_PATH = envVars.get('GIT_PROJECT_PATH')
def gettags = "git ls-remote -t --heads origin".execute(null, new File(GIT_PROJECT_PATH))
return gettags.text.readLines()
.collect { it.split()[1].replaceAll('\\^\\{\\}', '').replaceAll('refs/\\w+/', '') }
.unique()
See full explanation here: https://stackoverflow.com/a/37810768/658497
Django 2.2 now has a bulk_update method (release notes).
https://docs.djangoproject.com/en/stable/ref/models/querysets/#bulk-update
Example:
# get a pk: record dictionary of existing records
updates = YourModel.objects.filter(...).in_bulk()
....
# do something with the updates dict
....
if hasattr(YourModel.objects, 'bulk_update') and updates:
# Use the new method
YourModel.objects.bulk_update(updates.values(), [list the fields to update], batch_size=100)
else:
# The old & slow way
with transaction.atomic():
for obj in updates.values():
obj.save(update_fields=[list the fields to update])
You can also use Python's collections.ChainMap, which was introduced in Python 3.3.
from collections import ChainMap
c = ChainMap(a, b)
c['a'] # returns 1
This has a few possible advantages, depending on your use-case. They are explained in more detail here, but I'll give a brief overview:
This mainly makes it useful for things like configuration dictionaries.
Remember to use the server's local name, not the domain name, when resolving the name
IIS AppPool\DefaultAppPool
(just a reminder because this tripped me up for a bit):
Tables work differently; sometimes counter-intuitively.
The solution is to use width
on the table cells instead of max-width
.
Although it may sound like in that case the cells won't shrink below the given width, they actually will.
With no restrictions on c, if you give the table a width of 70px, the widths of a, b and c will come out as 16, 42 and 12 pixels, respectively.
With a table width of 400 pixels, they behave like you say you expect in your grid above.
Only when you try to give the table too small a size (smaller than a.min+b.min+the content of C) will it fail: then the table itself will be wider than specified.
I made a snippet based on your fiddle, in which I removed all the borders and paddings and border-spacing, so you can measure the widths more accurately.
table {
  width: 70px;
}

table, tbody, tr, td {
  margin: 0;
  padding: 0;
  border: 0;
  border-spacing: 0;
}

.a, .c {
  background-color: red;
}

.b {
  background-color: #F77;
}

.a {
  min-width: 10px;
  width: 20px;
  max-width: 20px;
}

.b {
  min-width: 40px;
  width: 45px;
  max-width: 45px;
}

.c {}

<table>
  <tr>
    <td class="a">A</td>
    <td class="b">B</td>
    <td class="c">C</td>
  </tr>
</table>
This example works for any double, not just positive integers and takes into account negative numbers or those less than one. For example, 0.000053 would return 5.
private static int getMostSignificantDigit(double value) {
value = Math.abs(value);
if (value == 0) return 0;
while (value < 1) value *= 10;
char firstChar = String.valueOf(value).charAt(0);
return Integer.parseInt(firstChar + "");
}
To get the first digit, this sticks with String manipulation as it is far easier to read.
Check my answer here: https://stackoverflow.com/a/50039345/2647919
Or, even better, just do the ol' "drag and drop" of the folder onto your repository opened in the git web browser.
Open your repository in the web portal and you will see the listing of all your files. If you have just recently created the repo and initialized it with a README, you will only see the README listed.
Open the folder which you want to upload, then drag and drop it onto the listing in the browser.
You can groupby on cols 'A' and 'B', call size, and then reset_index and rename the generated column:
In [26]:
df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'})
Out[26]:
A B count
0 no no 1
1 no yes 2
2 yes no 4
3 yes yes 3
Update
A little explanation: by grouping on the 2 columns, this groups rows where the A and B values are the same; we then call size, which returns the count of rows in each group:
In[202]:
df1.groupby(['A','B']).size()
Out[202]:
A B
no no 1
yes 2
yes no 4
yes 3
dtype: int64
So now to restore the grouped columns, we call reset_index:
In[203]:
df1.groupby(['A','B']).size().reset_index()
Out[203]:
A B 0
0 no no 1
1 no yes 2
2 yes no 4
3 yes yes 3
This restores the indices, but the size aggregation is turned into a generated column 0, so we have to rename it:
In[204]:
df1.groupby(['A','B']).size().reset_index().rename(columns={0:'count'})
Out[204]:
A B count
0 no no 1
1 no yes 2
2 yes no 4
3 yes yes 3
groupby does accept the arg as_index, which we could have set to False so it doesn't make the grouped columns the index, but this generates a series and you'd still have to restore the indices and so on:
In[205]:
df1.groupby(['A','B'], as_index=False).size()
Out[205]:
A B
no no 1
yes 2
yes no 4
yes 3
dtype: int64
Late, but a document fragment could be a node:
function hasChild(el){
var child = el && el.firstChild;
while (child) {
if (child.nodeType === 1 || child.nodeType === 11) {
return true;
}
child = child.nextSibling;
}
return false;
}
// or
function hasChild(el){
for (var i = 0; el && el.childNodes[i]; i++) {
if (el.childNodes[i].nodeType === 1 || el.childNodes[i].nodeType === 11) {
return true;
}
}
return false;
}
See:
https://github.com/k-gun/so/blob/master/so.dom.js#L42
https://github.com/k-gun/so/blob/master/so.dom.js#L741
./gradlew clean
Uses your project's gradle wrapper to execute your project's clean
task. Usually, this just means the deletion of the build directory.
./gradlew clean assembleDebug
Again, uses your project's gradle wrapper to execute the clean
and assembleDebug
tasks, respectively. So, it will clean first, then execute assembleDebug
, after any non-up-to-date dependent tasks.
./gradlew clean :assembleDebug
Is essentially the same as #2. The colon represents the task path. Task paths are essential in Gradle multi-project builds, not so much in this context. It means run the root project's assembleDebug task. Here, the root project is the only project.
Android Studio --> Build --> Clean
Is essentially the same as ./gradlew clean
. See here.
For more info, I suggest taking the time to read through the Android docs, especially this one.
str1 is the original string with non-alphanumeric characters and spaces removed, and str2 is the original string reversed.
function palindrome(str) {
var str1 = str.toLowerCase().replace(/\s/g, '').replace(
/[^a-zA-Z 0-9]/gi, "");
var str2 = str.toLowerCase().replace(/\s/g, '').replace(
/[^a-zA-Z 0-9]/gi, "").split("").reverse().join("");
if (str1 === str2) {
return true;
}
return false;
}
palindrome("almostomla");
This little class never gives me any problems; just extend it and use the copy() method:
import simplejson as json
class BlindCopy(object):
def copy(self, json_str):
dic = json.loads(json_str)
for k, v in dic.iteritems():
if hasattr(self, k):
setattr(self, k, v);
The default for ResultSet.getInt when the field value is NULL is to return 0, which is also the default value for your iVal declaration. In which case your test is completely redundant.
If you actually want to do something different if the field value is NULL, I suggest:
int iVal = 0;
ResultSet rs = magicallyAppearingStmt.executeQuery(query);
if (rs.next()) {
iVal = rs.getInt("ID_PARENT");
if (rs.wasNull()) {
// handle NULL field value
}
}
(Edited as @martin comments below; the OP code as written would not compile because iVal
is not initialised)
If someone ends up here from google, looking to convert a single figure to a .pdf (that was what I was looking for):
import matplotlib.pyplot as plt
f = plt.figure()
plt.plot(range(10), range(10), "o")
plt.show()
f.savefig("foo.pdf", bbox_inches='tight')
You could always check out JMF (Java Media Framework). It is pretty old and abandoned, but it works and I've used it for apps before. Looks like it handles what you're asking for.
native ES6 solution:
const pos = data.findIndex(el => el.id === ID_TO_REMOVE);
if (pos >= 0)
data.splice(pos, 1);
if you know that the element is in the array for sure:
data.splice(data.findIndex(el => el.id === ID_TO_REMOVE), 1);
prototype:
Array.prototype.removeByProp = function(prop,val) {
const pos = this.findIndex(x => x[prop] === val);
if (pos >= 0)
return this.splice(pos, 1);
};
// usage:
ar.removeByProp('id', ID_TO_REMOVE);
http://jsfiddle.net/oriadam/72kgprw5/
note: this removes the item in-place. if you need a new array use filter
as mentioned in previous answers.
There are two ways to think about your phrase "application heap size available":
How much heap can my app use before a hard error is triggered? And
How much heap should my app use, given the constraints of the Android OS version and hardware of the user's device?
There is a different method for determining each of the above.
For item 1 above: maxMemory()
which can be invoked (e.g., in your main activity's onCreate()
method) as follows:
Runtime rt = Runtime.getRuntime();
long maxMemory = rt.maxMemory();
Log.v("onCreate", "maxMemory:" + Long.toString(maxMemory));
This method tells you how many total bytes of heap your app is allowed to use.
For item 2 above: getMemoryClass()
which can be invoked as follows:
ActivityManager am = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
int memoryClass = am.getMemoryClass();
Log.v("onCreate", "memoryClass:" + Integer.toString(memoryClass));
This method tells you approximately how many megabytes of heap your app should use if it wants to be properly respectful of the limits of the present device, and of the rights of other apps to run without being repeatedly forced into the onStop()
/ onResume()
cycle as they are rudely flushed out of memory while your elephantine app takes a bath in the Android jacuzzi.
This distinction is not clearly documented, so far as I know, but I have tested this hypothesis on five different Android devices (see below) and have confirmed to my own satisfaction that this is a correct interpretation.
For a stock version of Android, maxMemory()
will typically return about the same number of megabytes as are indicated in getMemoryClass()
(i.e., approximately a million times the latter value).
The only situation (of which I am aware) for which the two methods can diverge is on a rooted device running an Android version such as CyanogenMod, which allows the user to manually select how large a heap size should be allowed for each app. In CM, for example, this option appears under "CyanogenMod settings" / "Performance" / "VM heap size".
NOTE: BE AWARE THAT SETTING THIS VALUE MANUALLY CAN MESS UP YOUR SYSTEM, ESPECIALLY if you select a smaller value than is normal for your device.
Here are my test results showing the values returned by maxMemory()
and getMemoryClass()
for four different devices running CyanogenMod, using two different (manually-set) heap values for each:
In addition to the above, I tested on a Novo7 Paladin tablet running Ice Cream Sandwich. This was essentially a stock version of ICS, except that I've rooted the tablet through a simple process that does not replace the entire OS, and in particular does not provide an interface that would allow the heap size to be manually adjusted.
For that device, here are the results:
Also (per Kishore in a comment below):
And (per akauppi's comment):
Per a comment from cmcromance:
And (per tencent's comments):
Other Devices
I haven't tested these two methods using the special android:largeHeap="true" manifest option available since Honeycomb, but thanks to cmcromance and tencent we do have some sample largeHeap values, as reported above.
My expectation (which seems to be supported by the largeHeap numbers above) would be that this option would have an effect similar to setting the heap manually via a rooted OS - i.e., it would raise the value of maxMemory()
while leaving getMemoryClass()
alone. There is another method, getLargeMemoryClass(), that indicates how much memory is allowable for an app using the largeHeap setting. The documentation for getLargeMemoryClass() states, "most applications should not need this amount of memory, and should instead stay with the getMemoryClass() limit."
If I've guessed correctly, then using that option would have the same benefits (and perils) as would using the space made available by a user who has upped the heap via a rooted OS (i.e., if your app uses the additional memory, it probably will not play as nicely with whatever other apps the user is running at the same time).
Note that the memory class apparently need not be a multiple of 8MB.
We can see from the above that the getMemoryClass()
result is unchanging for a given device/OS configuration, while the maxMemory() value changes when the heap is set differently by the user.
My own practical experience is that on the G1 (which has a memory class of 16), if I manually select 24MB as the heap size, I can run without erroring even when my memory usage is allowed to drift up toward 20MB (presumably it could go as high as 24MB, although I haven't tried this). But other similarly large-ish apps may get flushed from memory as a result of my own app's pigginess. And, conversely, my app may get flushed from memory if these other high-maintenance apps are brought to the foreground by the user.
So, you cannot go over the amount of memory specified by maxMemory()
. And, you should try to stay within the limits specified by getMemoryClass()
. One way to do that, if all else fails, might be to limit functionality for such devices in a way that conserves memory.
Finally, if you do plan to go over the number of megabytes specified in getMemoryClass()
, my advice would be to work long and hard on the saving and restoring of your app's state, so that the user's experience is virtually uninterrupted if an onStop()
/ onResume()
cycle occurs.
In my case, for reasons of performance I'm limiting my app to devices running 2.2 and above, and that means that almost all devices running my app will have a memoryClass of 24 or higher. So I can design to occupy up to 20MB of heap and feel pretty confident that my app will play nice with the other apps the user may be running at the same time.
But there will always be a few rooted users who have loaded a 2.2 or above version of Android onto an older device (e.g., a G1). When you encounter such a configuration, ideally, you ought to pare down your memory use, even if maxMemory()
is telling you that you can go much higher than the 16MB that getMemoryClass()
is telling you that you should be targeting. And if you cannot reliably ensure that your app will live within that budget, then at least make sure that onStop()
/ onResume()
works seamlessly.
getMemoryClass()
, as indicated by Dianne Hackborn (hackbod) above, is only available back to API level 5 (Android 2.0), and so, as she advises, you can assume that the physical hardware of any device running an earlier version of the OS is designed to optimally support apps occupying a heap space of no more than 16MB.
By contrast, maxMemory()
, according to the documentation, is available all the way back to API level 1. maxMemory()
, on a pre-2.0 version, will probably return a 16MB value, but I do see that in my (much later) CyanogenMod versions the user can select a heap value as low as 12MB, which would presumably result in a lower heap limit, and so I would suggest that you continue to test the maxMemory()
value, even for versions of the OS prior to 2.0. You might even have to refuse to run in the unlikely event that this value is set even lower than 16MB, if you need to have more than maxMemory()
indicates is allowed.
You can use the slice operator like this:
l = [1, 2, 3, 4, 5]
l2 = l[::2]  # take every 2nd item -> [1, 3, 5]
The following is the right way to alter topic config as of Kafka 0.10.2.0:
bin/kafka-configs.sh --zookeeper <zk_host> --alter --entity-type topics --entity-name test_topic --add-config retention.ms=86400000
Topic config alter operations have been deprecated for bin/kafka-topics.sh
.
WARNING: Altering topic configuration from this script has been deprecated and may be removed in future releases.
Going forward, please use kafka-configs.sh for this functionality
In my xml file, the header looked like this:
<?xml version="1.0" encoding="utf-16"?>
In a test file, I was reading the file bytes and decoding the data as UTF-8 (not realizing the header in this file was utf-16) to create a string.
byte[] data = Files.readAllBytes(Paths.get(path));
String dataString = new String(data, "UTF-8");
When I tried to deserialize this string into an object, I was seeing the same error:
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
Message: Content is not allowed in prolog.
When I updated the second line to
String dataString = new String(data, "UTF-16");
I was able to deserialize the object just fine. So as Romain had noted above, the encodings need to match.
These are copy/pastable functions based on the previous responses.
/**
* @param context
* @return the Screen height in DP
*/
public static float getHeightDp(Context context) {
DisplayMetrics displayMetrics = context.getResources().getDisplayMetrics();
float dpHeight = displayMetrics.heightPixels / displayMetrics.density;
return dpHeight;
}
/**
* @param context
* @return the screen width in dp
*/
public static float getWidthDp(Context context) {
DisplayMetrics displayMetrics = context.getResources().getDisplayMetrics();
float dpWidth = displayMetrics.widthPixels / displayMetrics.density;
return dpWidth;
}
You can do this using cross apply
SELECT c.BalanceDue AS BalanceDue
FROM Invoices
cross apply (select (InvoiceTotal - PaymentTotal - CreditTotal) as BalanceDue) as c
WHERE c.BalanceDue > 0;
I came across this issue when not using async/await on an asynchronous function (an API call). Adding them / using the promise handlers properly cleared the error.
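As a minimal sketch of that fix (the function and endpoint here are hypothetical, not from the original question):
// Before: calling fetch() without awaiting it leaves you holding a pending promise
async function loadUser() {
    const response = await fetch('/api/user'); // hypothetical API call
    return await response.json();
}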
"By default, when an application calls DriverManager.getConnection(url, ...)
and the database specified in the URL does not yet exist, a new (empty) database is created."—H2 Database.
Addendum: @Thomas Mueller shows how to Execute SQL on Connection, but I sometimes just create and populate in the code, as suggested below.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
/** @see http://stackoverflow.com/questions/5225700 */
public class H2MemTest {

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:h2:mem:", "sa", "");
        Statement st = conn.createStatement();
        st.execute("create table customer(id integer, name varchar(10))");
        st.execute("insert into customer values (1, 'Thomas')");
        Statement stmt = conn.createStatement();
        ResultSet rset = stmt.executeQuery("select name from customer");
        while (rset.next()) {
            String name = rset.getString(1);
            System.out.println(name);
        }
    }
}
Using @joran's sample data,
ggplot(dat, aes(x=xx, fill=yy)) + geom_histogram(alpha=0.2, position="identity")
Note that the default position of geom_histogram
is "stack"; see the "position adjustment" section of this page:
Expanding on @Tomalak's answer: the formula works for days other than Sunday and Monday, but you need to use different values in place of the 5. A way to arrive at the value you need is:
Value Needed = 7 - (DATEFIRST value for the desired day of the week, per the documentation) - 1
here is a link to the document: https://msdn.microsoft.com/en-us/library/ms181598.aspx
And here is a table that lays it out for you.
Day       | DATEFIRST Value | Formula Value | 7 - DATEFIRST Value - 1
----------|-----------------|---------------|------------------------
Monday    | 1               | 5             | 7 - 1 - 1 = 5
Tuesday   | 2               | 4             | 7 - 2 - 1 = 4
Wednesday | 3               | 3             | 7 - 3 - 1 = 3
Thursday  | 4               | 2             | 7 - 4 - 1 = 2
Friday    | 5               | 1             | 7 - 5 - 1 = 1
Saturday  | 6               | 0             | 7 - 6 - 1 = 0
Sunday    | 7               | -1            | 7 - 7 - 1 = -1
But you don't have to remember the table, just the formula; you could even use a slightly different one, since the main requirement is to use a value that makes the remainder come out to the correct number of days.
Here is a working example:
DECLARE @MondayDateFirstValue INT = 1
DECLARE @FridayDateFirstValue INT = 5
DECLARE @TestDate DATE = GETDATE()
SET @MondayDateFirstValue = 7 - @MondayDateFirstValue - 1
SET @FridayDateFirstValue = 7 - @FridayDateFirstValue - 1
SET DATEFIRST 6 -- notice this is saturday
SELECT
DATEADD(DAY, 0 - (@@DATEFIRST + @MondayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as MondayStartOfWeek
,DATEADD(DAY, 6 - (@@DATEFIRST + @MondayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as MondayEndOfWeek
,DATEADD(DAY, 0 - (@@DATEFIRST + @FridayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as FridayStartOfWeek
,DATEADD(DAY, 6 - (@@DATEFIRST + @FridayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as FridayEndOfWeek
SET DATEFIRST 2 --notice this is tuesday
SELECT
DATEADD(DAY, 0 - (@@DATEFIRST + @MondayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as MondayStartOfWeek
,DATEADD(DAY, 6 - (@@DATEFIRST + @MondayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as MondayEndOfWeek
,DATEADD(DAY, 0 - (@@DATEFIRST + @FridayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as FridayStartOfWeek
,DATEADD(DAY, 6 - (@@DATEFIRST + @FridayDateFirstValue + DATEPART(dw,@TestDate)) % 7, @TestDate) as FridayEndOfWeek
This method is agnostic of the DATEFIRST setting, which is what I needed, as I am building out a date dimension with multiple week methods included.
For VS2017 and 2019... with the modern core library SDK project files, the platform can be changed during the build process. Here's an example to change to the anycpu
platform, just before the built-in CoreCompile
task runs:
<Project Sdk="Microsoft.NET.Sdk" >
  <Target Name="SwitchToAnyCpu" BeforeTargets="CoreCompile" >
    <Message Text="Current Platform=$(Platform)" />
    <Message Text="Current PlatformTarget=$(PlatformTarget)" />
    <PropertyGroup>
      <Platform>anycpu</Platform>
      <PlatformTarget>anycpu</PlatformTarget>
    </PropertyGroup>
    <Message Text="New Platform=$(Platform)" />
    <Message Text="New PlatformTarget=$(PlatformTarget)" />
  </Target>
</Project>
In my case, I'm building an FPGA with BeforeTargets
and AfterTargets
tasks, but compiling a C# app in the main CoreCompile
. (partly as I may want some sort of command-line app, and partly because I could not figure out how to omit or override CoreCompile
)
To build for multiple, concurrent binaries such as x86 and x64: either a separate, manual build task would be needed or two separate project files with the respective <PlatformTarget>x86</PlatformTarget>
and <PlatformTarget>x64</PlatformTarget>
settings in the example, above.
The big difference is on Windows. Mercurial is supported natively; Git isn't. You can get very similar hosting to github.com with bitbucket.org (actually even better, as you get a free private repository). I was using msysGit for a while but moved to Mercurial and have been very happy with it.
This works. Try it.
<BODY ondragstart="return false;" ondrop="return false;">
This will work:
tr:hover {
background: #000 !important;
}
If you want to only apply bg-color on TD then:
tr:hover td {
background: #c7d4dd !important;
}
It will even overwrite your given color and apply this forcefully.
Trailing whitespace is any spaces or tabs after the last non-whitespace character on the line until the newline.
In your posted question, there is one extra space after try:
, and there are 12 extra spaces after pass
:
>>> post_text = '''\
... if self.tagname and self.tagname2 in list1:
... try:
... question = soup.find("div", "post-text")
... title = soup.find("a", "question-hyperlink")
... self.list2.append(str(title)+str(question)+url)
... current += 1
... except AttributeError:
... pass
... logging.info("%s questions passed, %s questions \
... collected" % (count, current))
... count += 1
... return self.list2
... '''
>>> for line in post_text.splitlines():
... if line.rstrip() != line:
... print(repr(line))
...
' try: '
' pass '
See where the strings end? There are spaces before the lines (indentation), but also spaces after.
Use your editor to find the end of the line and backspace. Many modern text editors can also automatically remove trailing whitespace from the end of the line, for example every time you save a file.
Try adding a style="width:100%;" to the img tag. That way the image will fill up the entire width of the page, thus scaling down if the image is larger than the viewport.
Try the code below. e.preventDefault() was added. This removes the default event action for the form.
$(document).ready(function () {
    $("form").submit(function (e) {
        $.ajax({
            url: '@Url.Action("HasJobInProgress", "ClientChoices")/',
            data: { id: '@Model.ClientId' },
            success: function (data) {
                showMsg(data, e);
            },
            cache: false
        });
        e.preventDefault();
    });
});
Also, you mentioned you wanted the form to not submit under the premise of validation, but I see no validation code here?
Here is an example of some added validation
$(document).ready(function () {
    $("form").submit(function (e) {
        /* put your form field(s) you want to validate here; this checks if your input field of choice is blank */
        if (!$('#inputID').val()) {
            e.preventDefault(); // This will prevent the form submission
        } else {
            // In the event all validations pass, THEN process the AJAX request.
            $.ajax({
                url: '@Url.Action("HasJobInProgress", "ClientChoices")/',
                data: { id: '@Model.ClientId' },
                success: function (data) {
                    showMsg(data, e);
                },
                cache: false
            });
        }
    });
});
Reading your comments led me to a dirty fix. This is not the right way, I know, but it can be a workaround.
$(function() {
    $("#inputFieldId").autocomplete({
        source: function(event, ui) {
            alert("do your functions here");
            return false;
        }
    });
});
I was referencing a mapped drive, and I found that mapped drives are not always available to the user account that is running the scheduled task, so I used \\IPADDRESS
instead of MAPDRIVELETTER:
and I am up and running.
function contentDisplay($text)
{
//replace UTF-8
$convertUT8 = array("\xe2\x80\x98", "\xe2\x80\x99", "\xe2\x80\x9c", "\xe2\x80\x9d", "\xe2\x80\x93", "\xe2\x80\x94", "\xe2\x80\xa6");
$to = array("'", "'", '"', '"', '-', '--', '...');
$text = str_replace($convertUT8,$to,$text);
//replace Windows-1252
$convertWin1252 = array(chr(145), chr(146), chr(147), chr(148), chr(150), chr(151), chr(133));
$to = array("'", "'", '"', '"', '-', '--', '...');
$text = str_replace($convertWin1252,$to,$text);
//replace accents
$convertAccents = array('À', 'Á', 'Â', 'Ã', 'Ä', 'Å', 'Æ', 'Ç', 'È', 'É', 'Ê', 'Ë', 'Ì', 'Í', 'Î', 'Ï', 'Ð', 'Ñ', 'Ò', 'Ó', 'Ô', 'Õ', 'Ö', 'Ø', 'Ù', 'Ú', 'Û', 'Ü', 'Ý', 'ß', 'à', 'á', 'â', 'ã', 'ä', 'å', 'æ', 'ç', 'è', 'é', 'ê', 'ë', 'ì', 'í', 'î', 'ï', 'ñ', 'ò', 'ó', 'ô', 'õ', 'ö', 'ø', 'ù', 'ú', 'û', 'ü', 'ý', 'ÿ', 'A', 'a', 'A', 'a', 'A', 'a', 'C', 'c', 'C', 'c', 'C', 'c', 'C', 'c', 'D', 'd', 'Ð', 'd', 'E', 'e', 'E', 'e', 'E', 'e', 'E', 'e', 'E', 'e', 'G', 'g', 'G', 'g', 'G', 'g', 'G', 'g', 'H', 'h', 'H', 'h', 'I', 'i', 'I', 'i', 'I', 'i', 'I', 'i', 'I', 'i', '?', '?', 'J', 'j', 'K', 'k', 'L', 'l', 'L', 'l', 'L', 'l', '?', '?', 'L', 'l', 'N', 'n', 'N', 'n', 'N', 'n', '?', 'O', 'o', 'O', 'o', 'O', 'o', 'Œ', 'œ', 'R', 'r', 'R', 'r', 'R', 'r', 'S', 's', 'S', 's', 'S', 's', 'Š', 'š', 'T', 't', 'T', 't', 'T', 't', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'W', 'w', 'Y', 'y', 'Ÿ', 'Z', 'z', 'Z', 'z', 'Ž', 'ž', '?', 'ƒ', 'O', 'o', 'U', 'u', 'A', 'a', 'I', 'i', 'O', 'o', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', '?', '?', '?', '?', '?', '?');
$to = array('A', 'A', 'A', 'A', 'A', 'A', 'AE', 'C', 'E', 'E', 'E', 'E', 'I', 'I', 'I', 'I', 'D', 'N', 'O', 'O', 'O', 'O', 'O', 'O', 'U', 'U', 'U', 'U', 'Y', 's', 'a', 'a', 'a', 'a', 'a', 'a', 'ae', 'c', 'e', 'e', 'e', 'e', 'i', 'i', 'i', 'i', 'n', 'o', 'o', 'o', 'o', 'o', 'o', 'u', 'u', 'u', 'u', 'y', 'y', 'A', 'a', 'A', 'a', 'A', 'a', 'C', 'c', 'C', 'c', 'C', 'c', 'C', 'c', 'D', 'd', 'D', 'd', 'E', 'e', 'E', 'e', 'E', 'e', 'E', 'e', 'E', 'e', 'G', 'g', 'G', 'g', 'G', 'g', 'G', 'g', 'H', 'h', 'H', 'h', 'I', 'i', 'I', 'i', 'I', 'i', 'I', 'i', 'I', 'i', 'IJ', 'ij', 'J', 'j', 'K', 'k', 'L', 'l', 'L', 'l', 'L', 'l', 'L', 'l', 'l', 'l', 'N', 'n', 'N', 'n', 'N', 'n', 'n', 'O', 'o', 'O', 'o', 'O', 'o', 'OE', 'oe', 'R', 'r', 'R', 'r', 'R', 'r', 'S', 's', 'S', 's', 'S', 's', 'S', 's', 'T', 't', 'T', 't', 'T', 't', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'W', 'w', 'Y', 'y', 'Y', 'Z', 'z', 'Z', 'z', 'Z', 'z', 's', 'f', 'O', 'o', 'U', 'u', 'A', 'a', 'I', 'i', 'O', 'o', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'U', 'u', 'A', 'a', 'AE', 'ae', 'O', 'o');
$text = str_replace($convertAccents,$to,$text);
//Encode the characters
$text = htmlentities($text);
//normalize the line breaks (here because it applies to all text)
$text = str_replace("\r\n", "\n", $text);
$text = str_replace("\r", "\n", $text);
//decode the <code> tags
$codeOpen = htmlentities('<').'code'.htmlentities('>');
if (strpos($text, $codeOpen) !== false) // strict check: strpos() returns 0 when the match is at the start
{
$text = str_replace($codeOpen, html_entity_decode(htmlentities('<')) . "code" . html_entity_decode(htmlentities('>')), $text);
}
$codeOpen = htmlentities('<').'/code'.htmlentities('>');
if (strpos($text, $codeOpen) !== false)
{
$text = str_replace($codeOpen, html_entity_decode(htmlentities('<')) . "/code" . html_entity_decode(htmlentities('>')), $text);
}
//match everything between <code> and </code>, the msU is what makes this work here, ADD this to REGEX archive
$regex = '/<code>(.*)<\/code>/msU';
$code = preg_match($regex, $text, $matches);
if ($code == 1)
{
if (is_array($matches) && count($matches) >= 2)
{
$newcode = $matches[1];
$newcode = nl2br($newcode);
}
//remove <code>and this</code> from $text;
$text = str_replace('<code>' . $matches[1] . '</code>', 'PLACEHOLDERCODE1', $text);
//convert the line breaks to paragraphs
$text = '<p>' . str_replace("\n\n", '</p><p>', $text) . '</p>';
$text = str_replace("\n" , '<br />', $text);
$text = str_replace('</p><p>', '</p>' . "\n\n" . '<p>', $text);
$text = str_replace('PLACEHOLDERCODE1', '<code>'.$newcode.'</code>', $text);
}
else
{
$code = false;
}
if ($code == false)
{
//convert the line breaks to paragraphs
$text = '<p>' . str_replace("\n\n", '</p><p>', $text) . '</p>';
$text = str_replace("\n" , '<br />', $text);
$text = str_replace('</p><p>', '</p>' . "\n\n" . '<p>', $text);
}
return $text;
}
try this:
<input type="button" onClick="document.getElementById('middle').scrollIntoView();" />
While Fosco's answer is not wrong, there is a case to be considered here: mixed arrays. Imagine I have an array like this:
$a = array(
"nice",
"car" => "fast",
"none"
);
Now, PHP allows this kind of syntax, but it has one problem: if I run Fosco's code I get 0
, which is wrong for me. But why does this happen?
Because when doing comparisons between strings and integers, PHP converts the strings to integers (and this is kinda stupid in my opinion), so when array_search()
searches for the index it stops at the first one, because apparently ("car" == 0)
is true.
Setting array_search()
to strict mode won't solve the problem because then array_search("0", array_keys($a))
would return false even if an element with index 0 exists.
So my solution just converts all indexes from array_keys()
to strings and then compares them correctly:
echo array_search("car", array_map("strval", array_keys($a)));
Prints 1
, which is correct.
EDIT:
As Shaun pointed out in the comment below, the same thing applies to the index value, if you happen to search for an int index like this:
$a = array(
"foo" => "bar",
"nice",
"car" => "fast",
"none"
);
$ind = 0;
echo array_search($ind, array_map("strval", array_keys($a)));
You will always get 0
, which is wrong, so the solution would be to cast the index (if you use a variable) to a string like this:
$ind = 0;
echo array_search((string)$ind, array_map("strval", array_keys($a)));
I too was struggling with nested views in Angular.
Once I got a hold of ui-router, I knew I was never going back to Angular's default routing functionality.
Here is an example application that uses multiple levels of view nesting:
app.config(function ($stateProvider, $urlRouterProvider, $httpProvider) {
    // navigate to view1 view by default
    $urlRouterProvider.otherwise("/view1");
    $stateProvider
        .state('view1', {
            url: '/view1',
            templateUrl: 'partials/view1.html',
            controller: 'view1.MainController'
        })
        .state('view1.nestedViews', {
            url: '/view1',
            views: {
                'childView1': { templateUrl: 'partials/view1.childView1.html', controller: 'childView1Ctrl' },
                'childView2': { templateUrl: 'partials/view1.childView2.html', controller: 'childView2Ctrl' },
                'childView3': { templateUrl: 'partials/view1.childView3.html', controller: 'childView3Ctrl' }
            }
        })
        .state('view2', {
            url: '/view2',
        })
        .state('view3', {
            url: '/view3',
        })
        .state('view4', {
            url: '/view4',
        });
});
As can be seen, there are 4 main views (view1, view2, view3, view4), and view1 has 3 child views.
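For completeness, here is a sketch of what the parent template (partials/view1.html) might contain; ui-router fills each named ui-view from the views block above. The markup is my assumption, not part of the original answer:
<!-- partials/view1.html -->
<div ui-view="childView1"></div>
<div ui-view="childView2"></div>
<div ui-view="childView3"></div>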
The generic needle/haystack example is the following, with variables:
#!/bin/bash
needle="a_needle"
haystack="a_needle another_needle a_third_needle"
if [[ $haystack == *"$needle"* ]]; then
echo "needle found"
else
echo "needle NOT found"
fi
As stated in the most voted answer so far, you can work with integers: multiply all your factors by 10 for each decimal place you are working with, and divide the result by the same number.
For example, if you are working with 2 decimals, you multiply all your factors by 100 before doing the operation, and then divide the result by 100.
Here's an example, Result1 is the usual result, Result2 uses the solution:
var Factor1 = "1110.7";
var Factor2 = "2220.2";
var Result1 = Number(Factor1) + Number(Factor2);
var Result2 = ((Number(Factor1) * 100) + (Number(Factor2) * 100)) / 100;
var Result3 = (Number(parseFloat(Number(Factor1)) + parseFloat(Number(Factor2))).toPrecision(2));
document.write("Result1: " + Result1 + "<br>Result2: " + Result2 + "<br>Result3: " + Result3);
The third result is to show what happens when using parseFloat instead, which created a conflict in our case.
Change your syntax from {{ }}
to {!! !!}
.
As The Alpha said in a comment above (not an answer so I thought I'd post), in Laravel 5, the {{ }}
(previously non-escaped output syntax) has changed to {!! !!}
. Replace {{ }}
with {!! !!}
and it should work.
It is the severity level of the error. Levels from 11 - 20 throw an error in SQL. The higher the level, the more severe the error, and the transaction should be aborted.
You will get the syntax error when you do:
RAISERROR('Cannot Insert where salary > 1000')
because you have not specified the required parameters (severity level and state).
If you wish to issue a warning and not an exception
, use levels 0 - 10.
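For example, supplying both parameters makes the statement valid (16 here is a typical user-defined error severity, and 1 is an arbitrary state; the same message at severity 10 would be informational only):
RAISERROR('Cannot Insert where salary > 1000', 16, 1); -- raises an error
RAISERROR('Cannot Insert where salary > 1000', 10, 1); -- prints a message, no exception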
From MSDN:
severity
Is the user-defined severity level associated with this message. When using msg_id to raise a user-defined message created using sp_addmessage, the severity specified on RAISERROR overrides the severity specified in sp_addmessage. Severity levels from 0 through 18 can be specified by any user. Severity levels from 19 through 25 can only be specified by members of the sysadmin fixed server role or users with ALTER TRACE permissions. For severity levels from 19 through 25, the WITH LOG option is required.
state
Is an integer from 0 through 255. Negative values or values larger than 255 generate an error. If the same user-defined error is raised at multiple locations, using a unique state number for each location can help find which section of code is raising the errors. For detailed description here
Make sure you have full-text search feature installed.
Create full-text search catalog.
use AdventureWorks
create fulltext catalog FullTextCatalog as default
select *
from sys.fulltext_catalogs
Create full-text search index.
create fulltext index on Production.ProductDescription(Description)
key index PK_ProductDescription_ProductDescriptionID
Before you create the index, make sure:
- you don't already have full-text search index on the table as only one full-text search index allowed on a table
- a unique index exists on the table. The index must be based on single-key column, that does not allow NULL.
- full-text catalog exists. You have to specify full-text catalog name explicitly if there is no default full-text catalog.
You can do steps 2 and 3 in SQL Server Management Studio. In Object Explorer, right-click on a table, select the Full-Text index
menu item and then Define Full-Text Index...
sub-menu item. Full-Text indexing wizard will guide you through the process. It will also create a full-text search catalog for you if you don't have any yet.
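Once the index is populated you can query it; a minimal sketch using the CONTAINS predicate against the same AdventureWorks table (the search term is just an example):
select Description
from Production.ProductDescription
where contains(Description, 'bike')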
You can find more info at MSDN
So I experienced this yesterday (22 Feb '17). I tried uploading the build via Xcode (8.2) multiple times, and it showed (Processing)
. Then I tried it with Application Loader, still the same. I just had to wait ~12 hours before it spammed me with processing-done emails.
So yeah, it's not you, it's them.
overflow-x: scroll;
overflow-y: hidden;
EDIT:
It works for me:
<div style='overflow-x:scroll;overflow-y:hidden;width:250px;height:200px'>
<div style='width:400px;height:250px'></div>
</div>
I have just been in a similar position with regards to setting 777 permissions on the apache website hosting directory. After a little bit of tinkering, it seems that changing the group ownership of the folder to the apache group allowed access to the folder based on the user group.
1) Make sure that the group ownership of the folder is set to the group apache uses / generates for use (check /etc/group; mine was www-data on Ubuntu).
2) set the folder permissions to 774 to stop "everyone" from having any change access, but allowing the owner and group permissions required.
3) add your user account to the group that has permission on the folder (mine was www-data).
This solution is inspired by the one from @marco below. I have also updated his answer with these details.
The problem here is about lazy loading of the sub-objects, where Jackson only finds hibernate proxies, instead of full blown objects.
So we are left with two options: suppress the exception, as done in the most voted answer above, or make sure that the lazy-loaded objects are loaded.
If you choose to go with the latter option, the solution would be to use the jackson-datatype-hibernate library, and configure it to initialize lazy-loaded dependencies prior to serialization.
I added a new config class to do that.
@Configuration
public class JacksonConfig extends WebMvcConfigurerAdapter {

    @Bean
    @Primary
    public MappingJackson2HttpMessageConverter jacksonMessageConverter() {
        MappingJackson2HttpMessageConverter messageConverter = new MappingJackson2HttpMessageConverter();
        ObjectMapper mapper = new ObjectMapper();
        Hibernate5Module module = new Hibernate5Module();
        module.enable(Hibernate5Module.Feature.FORCE_LAZY_LOADING);
        mapper.registerModule(module);
        messageConverter.setObjectMapper(mapper);
        return messageConverter;
    }
}
@Primary
makes sure that no other Jackson config is used for initializing any other beans. @Bean
is as usual. module.enable(Hibernate5Module.Feature.FORCE_LAZY_LOADING);
is to enable Lazy loading of the dependencies.
Caution - please watch for its performance impact. Sometimes EAGER fetch helps, but even if you make the fetch eager, you will still need this code, because proxy objects still exist for all other mappings except @OneToOne.
PS: As a general comment, I would discourage the practice of sending whole data objects back in the JSON response. One should be using DTOs for this communication, and use some mapper like MapStruct to map them. This saves you from accidental security loopholes, as well as the above exception.
You can also use: finishAffinity()
Finish this activity as well as all activities immediately below it in the current task that have the same affinity.
You will always only get an indent error if there is actually an indent error. Double check that your final line is indented the same way as the other lines -- either with spaces or with tabs. Most likely, some of the lines had spaces (or tabs) and the other line had tabs (or spaces).
Trust in the error message -- if it says something specific, assume it to be true and figure out why.
You don't need to calculate anything, and probably shouldn't:
<!DOCTYPE html>
<head>
<style type="text/css">
body {background: blue; height:100%;}
header {background: red; height: 20px; width:100%}
h1 {font-size:1.2em; margin:0; padding:0;
height: 30px; font-weight: bold; background:yellow}
.theCalcDiv {background-color:green; padding-bottom: 100%}
</style>
</head>
<body>
<header>Some nav stuff here</header>
<h1>This is the heading</h1>
<div class="theCalcDiv">This blocks needs to have a CSS calc() height of 100% - the height of the other elements.
</div>
I stuck it all together for brevity.
I'm using Visual Studio with Parallels/Win 7 on a MacBook laptop keyboard and the only thing that worked was Fn + Enter/Return (that's the Mac shortcut for Insert).
The ffmpeg wiki links back to this page in reference to "How to split video efficiently". I'm not convinced this page answers that question, so I did as @AlcubierreDrive suggested…
echo "Two commands"
time ffmpeg -v quiet -y -i input.ts -vcodec copy -acodec copy -ss 00:00:00 -t 00:30:00 -sn test1.mkv
time ffmpeg -v quiet -y -i input.ts -vcodec copy -acodec copy -ss 00:30:00 -t 01:00:00 -sn test2.mkv
echo "One command"
time ffmpeg -v quiet -y -i input.ts -vcodec copy -acodec copy -ss 00:00:00 -t 00:30:00 \
-sn test3.mkv -vcodec copy -acodec copy -ss 00:30:00 -t 01:00:00 -sn test4.mkv
Which outputs...
Two commands
real 0m16.201s
user 0m1.830s
sys 0m1.301s
real 0m43.621s
user 0m4.943s
sys 0m2.908s
One command
real 0m59.410s
user 0m5.577s
sys 0m3.939s
I tested an SD & HD file; after a few runs & a little maths:
Two commands SD 0m53.94 #2 wins
One command SD 0m49.63
Two commands SD 0m55.00
One command SD 0m52.26 #1 wins
Two commands SD 0m58.60 #2 wins
One command SD 0m58.61
Two commands SD 0m54.60
One command SD 0m50.51 #1 wins
Two commands SD 0m53.94
One command SD 0m49.63 #1 wins
Two commands SD 0m55.00
One command SD 0m52.26 #1 wins
Two commands SD 0m58.71
One command SD 0m58.61 #1 wins
Two commands SD 0m54.63
One command SD 0m50.51 #1 wins
Two commands SD 1m6.67s #2 wins
One command SD 1m20.18
Two commands SD 1m7.67
One command SD 1m6.72 #1 wins
Two commands SD 1m4.92
One command SD 1m2.24 #1 wins
Two commands SD 1m1.73
One command SD 0m59.72 #1 wins
Two commands HD 4m23.20
One command HD 3m40.02 #1 wins
Two commands SD 1m1.30
One command SD 0m59.59 #1 wins
Two commands HD 3m47.89
One command HD 3m29.59 #1 wins
Two commands SD 0m59.82
One command SD 0m59.41 #1 wins
Two commands HD 3m51.18
One command HD 3m30.79 #1 wins
SD file = 1.35GB DVB transport stream
HD file = 3.14GB DVB transport stream
The single command is better if you are handling HD; this agrees with the manual's comments on using -ss after the input file to do a 'slow seek'. SD files show a negligible difference.
The two-command version should be quicker if you add another -ss before the input file for a 'fast seek', followed by the more accurate slow seek.
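A sketch of that fast-seek + slow-seek combination (the timestamps are illustrative; with -ss before -i, the second -ss then seeks relative to the fast-seek point):
time ffmpeg -v quiet -y -ss 00:29:30 -i input.ts -ss 00:00:30 \
-vcodec copy -acodec copy -t 00:30:00 -sn test5.mkv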
As Cody and Brett mentioned, you could reduce VBA slowdown with sensible use of Redim Preserve
. Brett suggested Mod
to do this.
You can also use a user defined Type
and Sub
to do this. Consider my code below:
Public Type dsIntArrayType
    eElems() As Integer
    eSize As Integer
End Type

Public Sub PushBackIntArray( _
        ByRef dsIntArray As dsIntArrayType, _
        ByVal intValue As Integer)
    With dsIntArray
        If UBound(.eElems) < (.eSize + 1) Then
            ReDim Preserve .eElems(.eSize * 2 + 1)
        End If
        .eSize = .eSize + 1
        .eElems(.eSize) = intValue
    End With
End Sub
This calls ReDim Preserve
only when the size has doubled. The member variable eSize
keeps track of the actual data size of eElems
. This approach has helped me improve performance when final array length is not known until run time.
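A minimal usage sketch (note, as an assumption of this sketch, that eElems must be given an initial size with ReDim before the first push, since UBound fails on an unallocated dynamic array):
Dim arr As dsIntArrayType
ReDim arr.eElems(0 To 3) ' small initial capacity; the Sub doubles it as needed
arr.eSize = 0

Dim i As Integer
For i = 1 To 100
    PushBackIntArray arr, i * 2
Next i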
Hope this helps others too.
A side comment here for those using any of these answers. Juba's answer is great, as it's very selective, if your variables are either numeric or character strings. If, however, you have a combination (e.g. a1, b1, a2, b2), it will not convert the characters properly.
As @Trenton Hoffman notes,
library(dplyr)
df <- mutate_each(df, funs(toupper))
affects both character and factor classes and works for "mixed variables"; e.g. if your variable contains both a character and a numeric value (e.g. a1), both will be converted to a factor. Overall this isn't too much of a concern, but if you end up wanting to match data.frames, for example
df3 <- df1[df1$v1 %in% df2$v1,]
where df1 has been converted and df2 contains a non-converted data.frame or similar, this may cause some problems. The workaround is that you briefly have to run
df2 <- df2 %>% mutate_each(funs(toupper), v1)
#or
df2 <- mutate_each(df2, funs(toupper))
#and then
df3 <- df1[df1$v1 %in% df2$v1,]
If you work with genomic data, this is where knowing this can come in handy.
This problem can be solved simply by converting the data to a list first:
var details = _webcontext.products.ToList();
var li = new List<Products>(); // li was undeclared in the original snippet
if (details != null)
{
    Parallel.ForEach(details, x =>
    {
        Products obj = new Products();
        obj.slno = x.slno;
        obj.ProductName = x.ProductName;
        obj.Price = Convert.ToInt32(x.Price);
        lock (li) // List<T> is not thread-safe; guard the shared Add
        {
            li.Add(obj);
        }
    });
    return li;
}
I see that some solutions have been provided, but no causes, so I will explain this in detail, as I believe it is as important to know what you were doing wrong as it is to just grab "something" that works from the given replies.
First, let's see what Oracle has to say
* <p>The returned array will be "safe" in that no references to it are
* maintained by this list. (In other words, this method must
* allocate a new array even if this list is backed by an array).
* The caller is thus free to modify the returned array.
It may not look important, but as you'll see, it is... So why does the following line fail? All objects in the list are String, but it does not convert them. Why?
List<String> tList = new ArrayList<String>();
tList.add("4");
tList.add("5");
String tArray[] = (String[]) tList.toArray();
Probably, many of you would think that this code is doing the same, but it does not.
Object tSObjectArray[] = new String[2];
String tStringArray[] = (String[]) tSObjectArray;
When in reality the written code is doing something like this. The javadoc says it! It will instantiate a new array, which will be of Objects!!!
Object tSObjectArray[] = new Object[2];
String tStringArray[] = (String[]) tSObjectArray;
So tList.toArray is instantiating Objects and not Strings...
Therefore, the natural solution that has not been mentioned in this thread, but is what Oracle recommends, is the following:
String tArray[] = tList.toArray(new String[0]);
Hope it is clear enough.
Another option is to trigger the onload
and/or onerror
events by creating an in memory image element and setting its src
attribute to the original src
attribute of the original image. Here's an example of what I mean:
$("<img/>")
.on('load', function() { console.log("image loaded correctly"); })
.on('error', function() { console.log("error loading image"); })
.attr("src", $(originalImage).attr("src"))
;
Hope this helps!
This has changed in Scala 2.8 which has a mechanism for using breaks. You can now do the following:
import scala.util.control.Breaks._
var largest = 0
// pass a function to the breakable method
breakable {
for (i<-999 to 1 by -1; j <- i to 1 by -1) {
val product = i * j
if (largest > product) {
break // BREAK!!
}
else if (product.toString.equals(product.toString.reverse)) {
largest = largest max product
}
}
}
Use Nullable<DateTime>
if possible.
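A quick sketch of what that looks like in C#:
DateTime? lastLogin = null;             // shorthand for Nullable<DateTime>
if (lastLogin.HasValue)
    Console.WriteLine(lastLogin.Value); // .Value is only safe when HasValue is true
else
    Console.WriteLine("never logged in");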
Here is a cool way of ensuring zoom in and zoom out with the mouse scroll-wheel in the Eclipse editor. It takes inspiration from the solution above from naveed ahmad, which was not working for me.
1) First download Autohotkey from http://www.autohotkey.com/ and install it, then run it.
2) Then download tarlog-plugins from https://code.google.com/p/tarlog-plugins/downloads/list
3) Put the downloaded .jar file in the eclipse/plugins folder.
4) Restart Eclipse.
5) Add the following Autohotkey script, save it then reload it (right click on Autohotkey icon in taskbar and click "Reload this script")
; Ctrl + MouseWheel zooming in Eclipse Editor.
; Requires Tarlog plugins (https://code.google.com/p/tarlog-plugins/).
#IfWinActive ahk_class SWT_Window0
^WheelUp:: Send ^{NumpadAdd}
^WheelDown:: Send ^{NumpadSub}
#IfWinActive
And you should be done. You can now zoom in or zoom out with ctrl+mousewheel up and ctrl+mousewheel down. The only caveat is that Autohotkey must be running for this solution to work so ensure that it starts with Windows or run it just before firing Eclipse up. Works fine in Eclipse Kepler and Luna.
You can use window.confirm inside your function, combined with an if condition:
delete(whatever: any) {
    if (window.confirm('Are you sure you want to delete this item?')) {
        // put your delete method logic here
    }
}
When you call the delete method, it will pop up a confirmation message, and when you press OK it will perform all the logic inside the if condition.
:set tabstop=4
:set shiftwidth=4
:set expandtab
This will insert four spaces instead of a tab character. Spaces are a bit more “stable”, meaning that text indented with spaces will show up the same in the browser and any other application.
Maybe this will be useful to someone in the future. I have implemented a custom Authorize attribute like this:
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true, Inherited = true)]
public class ClaimAuthorizeAttribute : AuthorizeAttribute, IAuthorizationFilter
{
    private readonly string _claim;

    public ClaimAuthorizeAttribute(string Claim)
    {
        _claim = Claim;
    }

    public void OnAuthorization(AuthorizationFilterContext context)
    {
        var user = context.HttpContext.User;
        if (user.Identity.IsAuthenticated && user.HasClaim(ClaimTypes.Name, _claim))
        {
            return;
        }
        context.Result = new ForbidResult();
    }
}
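It can then be applied like any other authorization attribute; a hypothetical usage sketch (the claim value is made up):
[ClaimAuthorize("CanDeleteItems")]
public IActionResult Delete(int id)
{
    // delete logic here
    return Ok();
}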
It makes it clear that the method accepts null values, and that if you override the method, you should also accept null values.
It also serves as a hint for code analyzers like FindBugs. For example, if such a method dereferences its argument without checking for null first, FindBugs will emit a warning.
You are testing if the values of the variables error
and Already
are present in RepoOutput[RepoName.index(repo)]
. If these variables don't exist then an undefined object is used.
Both of your if
and elif
tests therefore are false; there is no undefined object in the value of RepoOutput[RepoName.index(repo)].
I think you wanted to test if certain strings are in the value instead:
{% if "error" in RepoOutput[RepoName.index(repo)] %}
<td id="error"> {{ RepoOutput[RepoName.index(repo)] }} </td>
{% elif "Already" in RepoOutput[RepoName.index(repo) %}
<td id="good"> {{ RepoOutput[RepoName.index(repo)] }} </td>
{% else %}
<td id="error"> {{ RepoOutput[RepoName.index(repo)] }} </td>
{% endif %}
</tr>
Other corrections I made:
- {% elif ... %} instead of {$ elif ... %}.
- Moved the </tr> tag out of the if conditional structure; it needs to be there always.
- The id attribute: note that most likely you want to use a class attribute instead here, not an id; the latter must have a value that is unique across your HTML document.
Personally, I'd set the class value here and reduce the duplication a little:
{% if "Already" in RepoOutput[RepoName.index(repo)] %}
{% set row_class = "good" %}
{% else %}
{% set row_class = "error" %}
{% endif %}
<td class="{{ row_class }}"> {{ RepoOutput[RepoName.index(repo)] }} </td>
This is a convention defined in RFC 2045 - Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies.
Private [subtype] values (starting with "X-") may be defined bilaterally between two cooperating agents without outside registration or standardization. Such values cannot be registered or standardized.
New standard values should be registered with IANA as described in RFC 2048.
A similar restriction applies to the top-level type. From the same source,
If another top-level type is to be used for any reason, it must be given a name starting with "X-" to indicate its non-standard status and to avoid a potential conflict with a future official name.
(Note that per RFC 2045, "[m]atching of media type and subtype is ALWAYS case-insensitive", so there's no difference between the interpretation of 'X-' and 'x-'.)
So it's fair to guess that "application/x-foo" was used before the IANA defined "application/foo". And it still might be used by folks who aren't aware of the IANA token assignment.
As Chris Hanson said MIME types are controlled by the IANA. This is detailed in RFC 2048 - Multipurpose Internet Mail Extensions (MIME) Part Four: Registration Procedures. According to RFC 3778, which is cited by the IANA as the definition for "application/pdf",
The application/pdf media type was first registered in 1993 by Paul Lindner for use by the gopher protocol; the registration was subsequently updated in 1994 by Steve Zilles.
The type "application/pdf" has been around for well over a decade. So it seems to me that wherever "application/x-pdf" has been used in new apps, the decision may not have been deliberate.
From the Node.js documentation:
Node provides a tri-directional popen(3) facility through the ChildProcess class.
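The three directions are the child's stdin, stdout and stderr; a minimal sketch using the child_process module (the ls command is just an example):
const { spawn } = require('child_process');

const child = spawn('ls', ['-lh']);                      // start a child process
child.stdout.on('data', (d) => process.stdout.write(d)); // read its stdout
child.stderr.on('data', (d) => process.stderr.write(d)); // read its stderr
child.stdin.end();                                       // stdin is writable too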
If your items are wider than the ListBox
, the other answers here won't help: the items in the ItemTemplate
remain wider than the ListBox
.
The fix that worked for me was to disable the horizontal scrollbar, which, apparently, also tells the container of all those items to remain only as wide as the list box.
Hence the combined fix to get ListBox items that are as wide as the list box, whether they are smaller and need stretching, or wider and need wrapping, is as follows:
<ListBox HorizontalContentAlignment="Stretch"
ScrollViewer.HorizontalScrollBarVisibility="Disabled">
<textarea id="editor1" name="editor1">This is sample text</textarea>
<div id="trackingDiv" ></div>
<script type="text/javascript">
CKEDITOR.replace( 'editor1' );
</script>
Let's try this.
Update:
To set data:
Create an instance first:
var editor = CKEDITOR.instances['editor1'];
Then,
editor.setData('your data');
or
editor.insertHtml('your html data');
or
editor.insertText('your text data');
And retrieve data from your editor:
editor.getData();
If you want to change a particular paragraph's HTML data in CKEditor:
var html = $(editor.editable.$);
$('#id_of_para',html).html('your html data');
These are the possible ways that I know of in CKEditor.
Not sure what you mean with "class constructor" but I'd assume you mean one of the ones below.
Instance constructor:
Public Sub New()
End Sub
Shared constructor:
Shared Sub New()
End Sub
Dump DB by mongodump
mongodump --host <database-host> -d <database-name> --port <database-port> --out directory
Restore DB by mongorestore
With Index Restore
mongorestore --host <database-host> -d <database-name> --port <database-port> foldername
Without Index Restore
mongorestore --noIndexRestore --host <database-host> -d <database-name> --port <database-port> foldername
Import Single Collection from CSV [the 1st column will be treated as the column/key name]
mongoimport --db <database-name> --port <database-port> --collection <collection-name> --type csv --headerline --file /path/to/myfile.csv
Import Single Collection from JSON
mongoimport --db <database-name> --port <database-port> --collection <collection-name> --file input.json
You should also be able to use Groovy and do what text-finder did:
marking a build as unstable with the Groovy Postbuild plugin
if(manager.logContains("Could not login to FTP server")) {
manager.addWarningBadge("FTP Login Failure")
manager.createSummary("warning.gif").appendText("<h1>Failed to login to remote FTP Server!</h1>", false, false, false, "red")
manager.buildUnstable()
}
Also see Groovy Postbuild Plugin
You don't need to set the value at all. ng-model takes care of it all:
Here's the fiddle for this: http://jsfiddle.net/terebentina/9mFpp/
Login as sys user in sql*plus. Then do this query:
select * from v$version;
or
select * from product_component_version;
Well any Javascript object functions sort-of like a "map"
randomObject['hello'] = 'world';
Typically people build simple objects for the purpose:
var myMap = {};
// ...
myMap[newKey] = newValue;
edit — well the problem with having an explicit "put" function is that you'd then have to go to pains to avoid having the function itself look like part of the map. It's not really a Javascripty thing to do.
13 Feb 2014 — modern JavaScript has facilities for creating object properties that aren't enumerable, and it's pretty easy to do. However, it's still the case that a "put" property, enumerable or not, would claim the property name "put" and make it unavailable. That is, there's still only one namespace per object.
The following answer could be helpful for the first part of your question:
From https://spark.apache.org/docs/1.5.1/api/java/org/apache/spark/sql/DataFrame.html, use join
:
Inner equi-join with another DataFrame using the given column.
PersonDf.join(ProfileDf,$"personId")
OR
PersonDf.join(ProfileDf,PersonDf("personId") === ProfileDf("personId"))
Update:
You can also save the DFs
as temp table using df.registerTempTable("tableName")
and you can write sql queries using sqlContext
.
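A sketch of that temp-table route, reusing the DataFrame names from above (the column list in the query is my assumption):
PersonDf.registerTempTable("person")
ProfileDf.registerTempTable("profile")
val joined = sqlContext.sql("select * from person p join profile pr on p.personId = pr.personId")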
Very simple, you create an array containing zeros using the reference shape:
result = np.zeros(b.shape)
# actually you can also use result = np.zeros_like(b)
# but that also copies the dtype not only the shape
and then insert the array where you need it:
result[:a.shape[0],:a.shape[1]] = a
and voila you have padded it:
print(result)
array([[ 1., 1., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 1., 0.],
[ 0., 0., 0., 0., 0., 0.]])
You can also make it a bit more general if you define where your upper left element should be inserted
result = np.zeros_like(b)
x_offset = 1 # 0 would be what you wanted
y_offset = 1 # 0 in your case
result[x_offset:a.shape[0]+x_offset,y_offset:a.shape[1]+y_offset] = a
result
array([[ 0., 0., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1.]])
but then be careful that you don't have offsets bigger than allowed. For x_offset = 2
for example this will fail.
If you have an arbitrary number of dimensions you can define a list of slices to insert the original array. I've found it interesting to play around a bit, and created a padding function that can pad (with offset) an arbitrarily shaped array, as long as the array and reference have the same number of dimensions and the offsets are not too big.
def pad(array, reference, offsets):
    """
    array: Array to be padded
    reference: Reference array with the desired shape
    offsets: list of offsets (number of elements must be equal to the dimension of the array)
    """
    # Create an array of zeros with the reference shape
    result = np.zeros(reference.shape)
    # Create a list of slices from offset to offset + shape in each dimension
    insertHere = [slice(offsets[dim], offsets[dim] + array.shape[dim]) for dim in range(array.ndim)]
    # Insert the array in the result at the specified offsets (as a tuple of slices)
    result[tuple(insertHere)] = array
    return result
And some test cases:
import numpy as np
# 1 Dimension
a = np.ones(2)
b = np.ones(5)
offset = [3]
pad(a, b, offset)
# 3 Dimensions
a = np.ones((3,3,3))
b = np.ones((5,4,3))
offset = [1,0,0]
pad(a, b, offset)
For exact values, do it like this:
public function DistAB()
{
    $earth_radius = 6371; // assumed mean Earth radius in km; this was undefined in the original
    $delta_lat = deg2rad($this->lat_b - $this->lat_a);
    $delta_lon = deg2rad($this->lon_b - $this->lon_a);
    $a = pow(sin($delta_lat / 2), 2);
    $a += cos(deg2rad($this->lat_a)) * cos(deg2rad($this->lat_b)) * pow(sin($delta_lon / 2), 2);
    $c = 2 * atan2(sqrt($a), sqrt(1 - $a));
    $distance = $earth_radius * $c; // c already carries the factor 2 from atan2
    $distance = round($distance, 4);
    $this->measure = $distance;
}
Hmm I think that should do it...
Edit:
For formulars and at least JS-implementations try: http://www.movable-type.co.uk/scripts/latlong.html
Dare me... I forgot to deg2rad all the values in the circle-functions...
Disabling Use command-line client from the settings on IntelliJ Ultimate 14.0.3 works for me.
I checked IDEA's documentation: IDEA doesn't need an SVN client anymore. See the description below, from https://www.jetbrains.com/idea/help/using-subversion-integration.html
=================================================================
Prerequisites
IntelliJ IDEA comes bundled with Subversion plugin. This plugin is turned on by default. If it is not, make sure that the plugin is enabled. IntelliJ IDEA's Subversion integration does not require a standalone Subversion client. All you need is an account in your Subversion repository. Subversion integration is enabled for the current project root or directory.
==================================================================
I had a thought that you might be able to use:
br:after {
content: ".";
visibility: hidden;
display: block;
}
But that didn't work on chrome or firefox.
Just thought I'd mention that in case it occurred to anyone else and I'd save them the trouble.
Firstly, name
isn't a valid attribute of an option
element. Instead you could use a data-* attribute, like this:
<option value="foo" data-name="bar">Foo Bar</option>
The main issue you have is that the JS is looking at the name
attribute of the select
element, not the chosen option. Try this:
$('#band_type_choices').on('change', function() {
$('.checkboxlist').hide();
$('#checkboxlist_' + $('option:selected', this).data("name")).css("display", "block");
});
Note the option:selected
selector within the context of the select
which raised the change event.
Here's a Swift implementation. You can make an extension so that it will be usable by other views if you like.
extension UIView {
    func addBorderAndColor(color: UIColor, width: CGFloat, corner_radius: CGFloat = 0, clipsToBounds: Bool = false) {
        self.layer.borderWidth = width
        self.layer.borderColor = color.cgColor
        self.layer.cornerRadius = corner_radius
        self.clipsToBounds = clipsToBounds
    }
}
Call it like this:
email.addBorderAndColor(color: UIColor.white, width: 0.5, corner_radius: 5, clipsToBounds: true)
Should have checked. Once I included the jQuery UI library, it worked fine and was animating...
You can store the SVG in a variable. Then manipulate the SVG string depending on your needs (i.e., set width, height, color, etc). Then use the result to set the background, e.g.
$circle-icon-svg: '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="10" cy="10" r="10" /></svg>';
$icon-color: #f00;
$icon-color-hover: #00f;
@function str-replace($string, $search, $replace: '') {
$index: str-index($string, $search);
@if $index {
@return str-slice($string, 1, $index - 1) + $replace + str-replace(str-slice($string, $index + str-length($search)), $search, $replace);
}
@return $string;
}
@function svg-fill ($svg, $color) {
@return str-replace($svg, '<svg', '<svg fill="#{$color}"');
}
@function svg-size ($svg, $width, $height) {
$svg: str-replace($svg, '<svg', '<svg width="#{$width}"');
$svg: str-replace($svg, '<svg', '<svg height="#{$height}"');
@return $svg;
}
.icon {
$icon-svg: svg-size($circle-icon-svg, 20, 20);
width: 20px; height: 20px; background: url('data:image/svg+xml;utf8,#{svg-fill($icon-svg, $icon-color)}');
&:hover {
background: url('data:image/svg+xml;utf8,#{svg-fill($icon-svg, $icon-color-hover)}');
}
}
I have made a demo too, http://sassmeister.com/gist/4cf0265c5d0143a9e734.
This code makes a few assumptions about the SVG, e.g. that <svg />
element does not have an existing fill colour and that neither width or height properties are set. Since the input is hardcoded in the SCSS document, it is quite easy to enforce these constraints.
Do not worry about the code duplication. gzip compression makes the difference negligible.
Find your IP address, and wherever you see 127.0.0.1, replace it with the workstation IP address you get from the link above.
. . .
Require ip your_workstation_IP_address
. . .
Allow from your_workstation_IP_address
. . .
Require ip your_workstation_IP_address
. . .
Allow from your_workstation_IP_address
. . .
and in the end don't forget to restart the server
sudo systemctl restart httpd.service
If you want to display some buttons with actions when a list item is swiped, there are a lot of libraries on the internet that have this behavior. I implemented a library that I found on the internet and I am very satisfied. It is very simple to use and very quick. I improved the original library and added a new click listener for item clicks. I also added the Font Awesome library (http://fortawesome.github.io/Font-Awesome/), and now you can simply add a new item title and specify the icon name from Font Awesome.
Here is the github link
I use the following function for delaying repeated actions, it will work for your case:
var delay = (function(){
var timer = 0;
return function(callback, ms){
clearTimeout (timer);
timer = setTimeout(callback, ms);
};
})();
Usage:
$(window).resize(function() {
delay(function(){
alert('Resize...');
//...
}, 500);
});
The callback function passed to it, will execute only when the last call to delay has been made after the specified amount of time, otherwise a timer will be reset, I find this useful for other purposes like detecting when the user stopped typing, etc...
Here is how the default implementation (ASP.NET Framework or ASP.NET Core) works. It uses a Key Derivation Function with random salt to produce the hash. The salt is included as part of the output of the KDF. Thus, each time you "hash" the same password you will get different hashes. To verify the hash the output is split back to the salt and the rest, and the KDF is run again on the password with the specified salt. If the result matches to the rest of the initial output the hash is verified.
Hashing:
public static string HashPassword(string password)
{
    byte[] salt;
    byte[] buffer2;
    if (password == null)
    {
        throw new ArgumentNullException("password");
    }
    using (Rfc2898DeriveBytes bytes = new Rfc2898DeriveBytes(password, 0x10, 0x3e8))
    {
        salt = bytes.Salt;
        buffer2 = bytes.GetBytes(0x20);
    }
    byte[] dst = new byte[0x31];
    Buffer.BlockCopy(salt, 0, dst, 1, 0x10);
    Buffer.BlockCopy(buffer2, 0, dst, 0x11, 0x20);
    return Convert.ToBase64String(dst);
}
Verifying:
public static bool VerifyHashedPassword(string hashedPassword, string password)
{
    byte[] buffer4;
    if (hashedPassword == null)
    {
        return false;
    }
    if (password == null)
    {
        throw new ArgumentNullException("password");
    }
    byte[] src = Convert.FromBase64String(hashedPassword);
    if ((src.Length != 0x31) || (src[0] != 0))
    {
        return false;
    }
    byte[] dst = new byte[0x10];
    Buffer.BlockCopy(src, 1, dst, 0, 0x10);
    byte[] buffer3 = new byte[0x20];
    Buffer.BlockCopy(src, 0x11, buffer3, 0, 0x20);
    using (Rfc2898DeriveBytes bytes = new Rfc2898DeriveBytes(password, dst, 0x3e8))
    {
        buffer4 = bytes.GetBytes(0x20);
    }
    return ByteArraysEqual(buffer3, buffer4);
}
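A quick round-trip sketch (assuming the two methods above, plus the ByteArraysEqual helper, are in scope); the two hashes differ because of the random salt, yet both verify:
string hash1 = HashPassword("p@ssw0rd");
string hash2 = HashPassword("p@ssw0rd");
Console.WriteLine(hash1 == hash2);                          // False: different random salts
Console.WriteLine(VerifyHashedPassword(hash1, "p@ssw0rd")); // True
Console.WriteLine(VerifyHashedPassword(hash2, "p@ssw0rd")); // True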
In HTML, SGML and XML, (1) attributes cannot be repeated, and should only be defined in an element once.
So your example:
<span style="color:blue" style="font-style:italic">Test</span>
is non-conformant to the HTML standard, and will result in undefined behaviour, which explains why different browsers are rendering it differently.
Since there is no defined way to interpret this, browsers can interpret it however they want and merge them, or ignore them as they wish.
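The conformant way to get both effects is to merge the declarations into a single style attribute:
<span style="color:blue; font-style:italic">Test</span>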
(1): Every article I can find states that attributes are "key/value" pairs or "attribute-value" pairs, heavily implying the keys must be unique. The best source I can find states:
Attribute names (id and status in this example) are subject to the same restrictions as other names in XML; they need not be unique across the whole DTD, however, but only within the list of attributes for a given element. (Emphasis mine.)
This post shows a couple of ways of controlling the session timeout: http://bytes.com/topic/php/insights/889606-setting-timeout-php-sessions
IMHO the second option is a nice solution:
<?php
/***
* Starts a session with a specific timeout and a specific GC probability.
* @param int $timeout The number of seconds until it should time out.
* @param int $probability The probability, in int percentage, that the garbage
* collection routine will be triggered right now.
* @param string $cookie_domain The domain path for the cookie.
*/
function session_start_timeout($timeout=5, $probability=100, $cookie_domain='/') {
    // Set the max lifetime
    ini_set("session.gc_maxlifetime", $timeout);
    // Set the session cookie to time out
    ini_set("session.cookie_lifetime", $timeout);
    // Change the save path. Sessions stored in the same path
    // all share the same lifetime; the lowest lifetime will be
    // used for all. Therefore, for this to work, the session
    // must be stored in a directory where only sessions sharing
    // its lifetime are. Best to just dynamically create one.
    $separator = strstr(strtoupper(substr(PHP_OS, 0, 3)), "WIN") ? "\\" : "/";
    $path = ini_get("session.save_path") . $separator . "session_" . $timeout . "sec";
    if(!file_exists($path)) {
        if(!mkdir($path, 0700)) { // 0700, not 600: the mode must be octal
            trigger_error("Failed to create session save path directory '$path'. Check permissions.", E_USER_ERROR);
        }
    }
    ini_set("session.save_path", $path);
    // Set the chance to trigger the garbage collection.
    ini_set("session.gc_probability", $probability);
    ini_set("session.gc_divisor", 100); // Should always be 100
    // Start the session!
    session_start();
    // Renew the time left until this session times out.
    // If you skip this, the session will time out based
    // on the time when it was created, rather than when
    // it was last used.
    if(isset($_COOKIE[session_name()])) {
        setcookie(session_name(), $_COOKIE[session_name()], time() + $timeout, $cookie_domain);
    }
}
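For example, to start a session that times out after ten minutes of inactivity:
session_start_timeout(600);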
I did test your code, and the only problem I could see was the lack of permission on the directory you were trying to write the file into.
Give "write" permission to the directory where you need to put the file. In your case it is the current directory.
Use "chmod" on Linux.
On Windows, add "Everyone" with "write" enabled to the security tab of the directory.
Edit: DTing's answer is more applicable to your question since it uses PIL, but I'll leave this up in case you want to know how to do it in numpy.
Here is a numpy/matplotlib solution that should work for N images (only color images) of any size/shape.
import numpy as np
import matplotlib.pyplot as plt
def concat_images(imga, imgb):
"""
Combines two color image ndarrays side-by-side.
"""
ha,wa = imga.shape[:2]
hb,wb = imgb.shape[:2]
max_height = np.max([ha, hb])
total_width = wa+wb
new_img = np.zeros(shape=(max_height, total_width, 3))
new_img[:ha,:wa]=imga
new_img[:hb,wa:wa+wb]=imgb
return new_img
def concat_n_images(image_path_list):
"""
Combines N color images from a list of image paths.
"""
output = None
for i, img_path in enumerate(image_path_list):
img = plt.imread(img_path)[:,:,:3]
if i==0:
output = img
else:
output = concat_images(output, img)
return output
Here is example use:
>>> images = ["ronda.jpeg", "rhod.jpeg", "ronda.jpeg", "rhod.jpeg"]
>>> output = concat_n_images(images)
>>> import matplotlib.pyplot as plt
>>> plt.imshow(output)
>>> plt.show()
While building a project, if the project properties show that it is built under target .NET Framework 4.5, update it to 4.6 or 4.6.1. Then the build will be able to locate Entity Framework 6.0 in the Web.config file. This approach solved my problem. (Select the target framework from Project Properties.)
Check the column collation. The script might change the collation to the table default; add the current collation to the script.
To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.
I'm not that familiar with linux, but the commands you posted are the steps to install the ftp server, configure the ec2 firewall rules (through the AWS API), then configure the ftp server to use the ports you allowed on the ec2 firewall.
So this step installs the ftp server (vsftpd):
> yum install vsftpd
These steps configure the ftp server:
> vi /etc/vsftpd/vsftpd.conf
-- Add following lines at the end of file --
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
> /etc/init.d/vsftpd restart
But the other two steps are more easily done through the Amazon console under EC2 Security Groups. There you need to configure the security group assigned to your server to allow connections on ports 20, 21, and 1024-1048.
Import gives you access to other modules in your program. You can't decide to import a text file. If you want to read from a file that's in the same directory, you can look at this. Here's another StackOverflow post about it.
I think the Vim documentation should've explained the meaning behind the naming of these commands. Just telling you what they do doesn't help you remember the names.
map
is the "root" of all recursive mapping commands. The root form applies to "normal", "visual+select", and "operator-pending" modes. (I'm using the term "root" as in linguistics.)
noremap
is the "root" of all non-recursive mapping commands. The root form applies to the same modes as map
. (Think of the nore
prefix to mean "non-recursive".)
(Note that there are also the !
modes like map!
that apply to insert & command-line.)
See below for what "recursive" means in this context.
Prepending a mode letter like n modifies the modes the mapping works in. It can choose a subset of the list of applicable modes (e.g. only "visual"), or choose other modes that map wouldn't apply to (e.g. "insert").
Using :help map-modes will show you a few tables that explain how to control which modes the mapping applies to.
Mode letters:
n: normal only
v: visual and select
o: operator-pending
x: visual only
s: select only
i: insert
c: command-line
l: insert, command-line, regexp-search (and others; collectively called the "Lang-Arg" pseudo-mode)
"Recursive" means that the mapping is expanded to a result, then the result is expanded to another result, and so on.
The expansion stops when one of these is true:
the result is no longer mapped to anything else, or
a non-recursive mapping was applied (its result is final).
At that point, Vim's default "meaning" of the final result is applied/executed.
"Non-recursive" means the mapping is only expanded once, and that result is applied/executed.
Example:
nmap K H
nnoremap H G
nnoremap G gg
The above causes K
to expand to H
, then H
to expand to G
and stop. It stops because of the nnoremap
, which expands and stops immediately. The meaning of G
will be executed (i.e. "jump to last line"). At most one non-recursive mapping will ever be applied in an expansion chain (it would be the last expansion to happen).
The mapping of G
to gg
only applies if you press G
, but not if you press K
. This mapping doesn't affect pressing K
regardless of whether G
was mapped recursively or not, since it's line 2 that causes the expansion of K
to stop, so line 3 wouldn't be used.
This is how you should be using mysql_fetch_assoc():
$result = mysql_query($query);
while ($row = mysql_fetch_assoc($result)) {
// Do stuff with $row
}
$result should be a resource. Even if the query returns no rows, $result is still a resource. The only time $result is a boolean value, is if there was an error when querying the database. In which case, you should find out what that error is by using mysql_error() and ensure that it can't happen. Then you don't have to hide from any errors.
You should always cover the base that errors may happen by doing:
if (!$result) {
die(mysql_error());
}
At least then you'll be more likely to actually fix the error, rather than leave the users with a glaring ugly error in their face.
I had the same problem. This worked for me:
CASE WHEN length(sp.tele_phone_number) = 10 THEN
    SUBSTR(sp.tele_phone_number, 4)
END
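In a full query that might look like this (the table name and the ELSE branch are assumptions for illustration):
SELECT CASE
           WHEN length(sp.tele_phone_number) = 10 THEN SUBSTR(sp.tele_phone_number, 4)
           ELSE sp.tele_phone_number
       END AS tele_phone_number
FROM sales_person sp;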
Best to dump to a compressed file:
mysqldump --no-create-info -u username -hhostname -p dbname | gzip > /backupsql.gz
and to restore, use pv (apt-get install pv) to monitor progress:
pv backupsql.gz | gunzip | mysql -uusername -hhostip -p dbname
On Mac: open the preferences.yml file in ~/Library/Preferences/SmartGit/19.1 and just comment out the listx line.
Besides the way described in the approved answer, there's another way, if you have single native libs in your project: java.library.path.
Answered this just to emphasize @rblarsen's comment on the question:
You don't need the style tags in the CSS-file
If you remove the style tag from your css file it will work.
Based on my comment, here is one way to get what you want done:
Start by selecting any cell in your range and press Ctrl + T.
This will give you this pop up:
Make sure the "Where is your table" text is correct and click OK. You will now have:
Now if you add a column header in D it will automatically be added to the table all the way to the last row:
Now if you enter a formula into this column:
After you enter it, the formula will be auto-filled all the way to the last row:
Now if you add a new row at the next row under your table:
Once entered, it will be resized to the width of your table and all columns with formulas will be added also:
Hope this solves your problem!
You can do this with git rebase -i
, passing in the revision that you want to use as the 'root':
git rebase -i origin/master
will open an editor window showing all of the commits you have made after the last commit in origin/master
. You can reject commits, squash commits into a single commit, or edit previous commits.
There are a few resources that can probably explain this in a better way, and show some other examples:
http://book.git-scm.com/4_interactive_rebasing.html
and
http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html
are the first two good pages I could find.
As simple as it gets, the answer in my opinion is:
first = list(prices)[0]
Converting the dictionary to a list outputs the keys, and we select the first key from the list.
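Alternatively, next(iter(...)) gives the first key without building the whole list:
first = next(iter(prices))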
Use GregorianCalendar. Set the date of the object, and then use getActualMaximum(Calendar.DAY_OF_MONTH).
http://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html#getActualMaximum%28int%29 (but it was the same in Java 1.4)
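A minimal sketch (the date values are placeholders):
import java.util.Calendar;
import java.util.GregorianCalendar;

public class DaysInMonth {
    public static void main(String[] args) {
        // Set the calendar to any day in the month of interest
        GregorianCalendar cal = new GregorianCalendar(2017, Calendar.FEBRUARY, 1);
        int days = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
        System.out.println(days); // prints 28
    }
}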
Dragging like jQueryUI: JsFiddle
You can drag the element from any point without weird centering.
$(document).ready(function() {
var $body = $('body');
var $target = null;
var isDraggEnabled = false;
var x, y; // drag offset within the element, set on mousedown
$body.on("mousedown", "div", function(e) {
var $this = $(this);
isDraggEnabled = $this.data("draggable");
if (isDraggEnabled) {
if(e.offsetX==undefined){
x = e.pageX-$(this).offset().left;
y = e.pageY-$(this).offset().top;
}else{
x = e.offsetX;
y = e.offsetY;
};
$this.addClass('draggable');
$body.addClass('noselect');
$target = $(e.target);
};
});
$body.on("mouseup", function(e) {
$target = null;
$body.find(".draggable").removeClass('draggable');
$body.removeClass('noselect');
});
$body.on("mousemove", function(e) {
if ($target) {
$target.offset({
top: e.pageY - y,
left: e.pageX - x
});
};
});
});
Since I cannot comment, adding this note in addition to @jethro's answer.
I found out that you also need to put the FOREIGN KEY clause as the last part of the CREATE TABLE statement; otherwise you will get a syntax error when installing your app. What I mean is, you cannot do something like this:
private static final String TASK_TABLE_CREATE = "create table "
+ TASK_TABLE + " (" + TASK_ID
+ " integer primary key autoincrement, " + TASK_TITLE
+ " text not null, " + TASK_NOTES + " text not null, "
+ TASK_CAT + " integer,"
+ " FOREIGN KEY ("+TASK_CAT+") REFERENCES "+CAT_TABLE+" ("+CAT_ID+"), "
+ TASK_DATE_TIME + " text not null);";
Where I put the TASK_DATE_TIME after the foreign key line.
You can use following formulas.
For Excel 2007 or later:
=IFERROR(VLOOKUP(D3,List!A:C,3,FALSE),"No Match")
For Excel 2003:
=IF(ISERROR(MATCH(D3,List!A:A, 0)), "No Match", VLOOKUP(D3,List!A:C,3,FALSE))
Note that:
List!A:C is the lookup table in VLOOKUP, and the value is returned from column 3 of it.
The last argument of VLOOKUP equals FALSE; in that case VLOOKUP will only find an exact match, and the values in the first column of List!A:C do not need to be sorted (opposite to the case when you're using TRUE).
If this SQL is being used in any PeopleSoft-specific code (Application Engine, SQLExec, SQLFetch, etc.) you could use %DateIn metaSQL. PeopleTools automatically converts the date to a format which would be accepted by the database platform the application is running on.
In case this SQL is being used to perform a backend update from a query analyzer (like SQL Developer or SQLTools), the date format that is being used is wrong. Oracle expects the date format to be DD-MON-YYYY, where MON could be JAN, FEB, MAR, etc.
In addition to @chanafdo's answer, you can use route names:
<a href="{{route('login')}}">login here</a>
With a parameter in the route name (when going to a URL like URI: profile/{id}):
<a href="{{route('profile', ['id' => 1])}}">login here</a>
<a href="<?php echo route('login')?>">login here</a>
With a parameter in the route name (when going to a URL like URI: profile/{id}):
<a href="<?php echo route('profile', ['id' => 1])?>">login here</a>
As of Laravel 5.2 you can use @php @endphp to write <?php ?> blocks in Blade.
Using Blade is your personal choice, but I suggest you use it and learn it. It has many wonderful features such as template inheritance, components & slots, subviews, etc.
Disclaimer: I am the author of Jsonix, a powerful open-source XML<->JSON JavaScript mapping library.
Today I've released the new version of the Jsonix Schema Compiler, with the new JSON Schema generation feature.
Let's take the Purchase Order schema for example. Here's a fragment:
<xsd:element name="purchaseOrder" type="PurchaseOrderType"/>
<xsd:complexType name="PurchaseOrderType">
<xsd:sequence>
<xsd:element name="shipTo" type="USAddress"/>
<xsd:element name="billTo" type="USAddress"/>
<xsd:element ref="comment" minOccurs="0"/>
<xsd:element name="items" type="Items"/>
</xsd:sequence>
<xsd:attribute name="orderDate" type="xsd:date"/>
</xsd:complexType>
You can compile this schema using the provided command-line tool:
java -jar jsonix-schema-compiler-full.jar
-generateJsonSchema
-p PO
schemas/purchaseorder.xsd
The compiler generates Jsonix mappings as well as the matching JSON Schema.
Here's what the result looks like (edited for brevity):
{
"id":"PurchaseOrder.jsonschema#",
"definitions":{
"PurchaseOrderType":{
"type":"object",
"title":"PurchaseOrderType",
"properties":{
"shipTo":{
"title":"shipTo",
"allOf":[
{
"$ref":"#/definitions/USAddress"
}
]
},
"billTo":{
"title":"billTo",
"allOf":[
{
"$ref":"#/definitions/USAddress"
}
]
}, ...
}
},
"USAddress":{ ... }, ...
},
"anyOf":[
{
"type":"object",
"properties":{
"name":{
"$ref":"http://www.jsonix.org/jsonschemas/w3c/2001/XMLSchema.jsonschema#/definitions/QName"
},
"value":{
"$ref":"#/definitions/PurchaseOrderType"
}
},
"elementName":{
"localPart":"purchaseOrder",
"namespaceURI":""
}
}
]
}
Now this JSON Schema is derived from the original XML Schema. It is not exactly a 1:1 transformation, but very, very close.
The generated JSON Schema matches the generated Jsonix mappings. So if you use Jsonix for XML<->JSON conversion, you should be able to validate JSON with the generated JSON Schema. It also contains all the required metadata from the originating XML Schema (like element, attribute and type names).
Disclaimer: At the moment this is a new and experimental feature. There are certain known limitations and missing functionality. But I'm expecting this to mature very fast.
Interpreting newlines as <br />
used to be a feature of Github-flavored markdown, but the most recent help document no longer lists this feature.
Fortunately, you can do it manually. The easiest way is to ensure that each line ends with two spaces. So, change
a
b
c
into
a__
b__
c
(where _
is a blank space).
Or, you can add explicit <br />
tags.
a <br />
b <br />
c
You could also check my md5 implementation. It should be approximately the same as the others posted above. Unfortunately, the performance is limited by the inner loop, which is impossible to optimize further.
The length of the arrays must be the same and the numbers must be the same throughout (the 1st number in both arrays must be the same, and so on).
Based on this comment, then you already have your algorithm:
Check if both arrays have the same length:
array1.length == array2.length
The numbers must be the same in the same position:
array1[x] == array2[x]
Knowing this, you can create your code like this (this is not Java code, it's an algorithm):
function compareArrays(array1, array2) {
    if array1 == null then return false
    if array2 == null then return false
    if array1.length != array2.length then return false
    for i <- 0 to array1.length - 1
        if array1[i] != array2[i] then return false
    return true
}
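If you want it as actual Java, so that the main below can call it, a direct translation of the algorithm might be:
public static boolean compareArrays(int[] array1, int[] array2) {
    if (array1 == null) return false;
    if (array2 == null) return false;
    if (array1.length != array2.length) return false;
    for (int i = 0; i < array1.length; i++) {
        if (array1[i] != array2[i]) return false;
    }
    return true;
}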
Note: your function should return a boolean, not be void; then recover the return value in a variable and use it to print the message "true" or "false":
public static void main(String[] args) {
int[] array1;
int[] array2;
//initialize the arrays...
//fill the arrays with items...
//call the compare function
boolean arrayEquality = compareArrays(array1, array2);
if (arrayEquality) {
System.out.println("arrays are equal");
} else {
System.out.println("arrays are not equal");
}
}
The problem is that HTML doesn't recognize the escape character. You could work around that by using the single quotes for the HTML attribute and the double quotes for the onclick.
<a href="#" onclick='DoEdit("Preliminary Assessment \"Mini\""); return false;'>edit</a>
How I load Icons: Using Visual Studio 2010: Go to the project properties, click Add Resource > Existing File, select your Icon.
You'll see that a Resources folder appeared. This was my problem: I had to click the loaded icon (in the Resources directory), and set "Copy to Output Directory" to "Copy always" (it was set to "Do not copy").
Now simply do:
Icon myIcon = new Icon("Resources/myIcon.ico");
// Replace all occurrences of searchStr in str with replacer
// Each match is replaced only once to prevent an infinite loop
// The algorithm iterates once over the input and only concatenates
// to the output, so it should be reasonably efficient
std::string replace(const std::string& str, const std::string& searchStr,
const std::string& replacer)
{
// Prevent an infinite loop if the input is empty
if (searchStr == "") {
return str;
}
std::string result = "";
size_t pos = 0;
size_t pos2 = str.find(searchStr, pos);
while (pos2 != std::string::npos) {
result += str.substr(pos, pos2-pos) + replacer;
pos = pos2 + searchStr.length();
pos2 = str.find(searchStr, pos);
}
result += str.substr(pos, str.length()-pos);
return result;
}
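A quick usage sketch, assuming the function above is in scope:
#include <iostream>
#include <string>

int main() {
    // both occurrences of "two" are replaced
    std::cout << replace("one two two", "two", "three") << std::endl; // prints "one three three"
    return 0;
}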
You simply need to specify on which tables to apply the DELETE
.
Delete only the deadline
rows:
DELETE `deadline` FROM `deadline` LEFT JOIN `job` ....
Delete the deadline
and job
rows:
DELETE `deadline`, `job` FROM `deadline` LEFT JOIN `job` ....
Delete only the job
rows:
DELETE `job` FROM `deadline` LEFT JOIN `job` ....
Change the segue identifier in the right panel in the section with an id. icon to match the string you used in your conditional.
Did you make sure that tools.jar made it on the compile class path? Maybe the path is incorrect.
task debug << {
configurations.compile.each { println it }
}
Just like any other collection. With the addition of the List<T>.ForEach
method.
foreach (var item in myMoney)
Console.WriteLine("amount is {0}, and type is {1}", item.amount, item.type);
for (int i = 0; i < myMoney.Count; i++)
Console.WriteLine("amount is {0}, and type is {1}", myMoney[i].amount, myMoney[i].type);
myMoney.ForEach(item => Console.WriteLine("amount is {0}, and type is {1}", item.amount, item.type));
As of data.table v1.13.0 you can use the function fcase()
(fast-case) to do SQL-like CASE
operations (also similar to dplyr::case_when()
):
require(data.table)
dt <- data.table(name = c('cow','pig','eagle','pigeon','cow','eagle'))
dt[ , category := fcase(name %in% c('cow', 'pig'), 'mammal',
name %in% c('eagle', 'pigeon'), 'bird') ]
In a POST method, you can put an array. However, in a PUT method, you should use http_build_query
to build the params like this:
curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( $postArr ) );
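For context, a fuller sketch of a PUT request might look like this (the URL and fields are placeholders):
$postArr = array('name' => 'value');
$ch = curl_init('https://example.com/api/resource');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT'); // send a PUT instead of a POST
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($postArr));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);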
just do:
type version.txt
and it will be displayed as if you typed:
set /p Build=<version.txt
echo %Build%
Your code is correct; you just used .div
instead of div
HTML
<div class="ui grid container">
<div class="ui center aligned three column grid">
<div class="column">
</div>
<div class="column">
</div>
</div>
CSS
div{
position: absolute;
top: 50%;
left: 50%;
margin-top: -50px;
margin-left: -50px;
width: 100px;
height: 100px;
}
Check out this Fiddle
git clone your-url local-dir
to check out the source code;
git pull
to update the source code in local-dir.
In addition to the methods described by @ctacke, you can also open SQL Server Compact Edition databases with SQL Server Management Studio. You'll need SQL Server 2008 to open SQL CE 3.5 databases.
I struggled with the same problem just a minute ago. My project was generated using v1.6.0 of angular-cli. What did the trick was:
npm update -g @angular/cli
then editing my package.json, changing the line "@angular/cli": "1.6.0", to "@angular/cli": "^1.6.0",
then npm update
What about this?
function replaceLast(x, y, z){
    // Note: this replaces the single character at the position of the
    // last occurrence of y, so it is intended for one-character searches.
    var a = x.split("");
    a[x.lastIndexOf(y)] = z;
    return a.join("");
}
replaceLast("Hello world!", "l", "x"); // Hello worxd!
<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.1/jquery.min.js"></script>
<script>
$( document ).ready(function() {
var $this = $('h1');
$this.css('color','#3498db');
$this.css('text-align','center');
$this.css('border','1px solid #ededed');
});
</script>
</head>
<body>
<h1>Title</h1>
</body>
</html>
If you want to use a bash script and keep it easy to read, I recommend putting the data in structured JSON and then using the lightweight tool jq in your bash command to iterate through the array. For example, with the following dataset:
[
{"specialId":"123",
"specialName":"First"},
{"specialId":"456",
"specialName":"Second"},
{"specialId":"789",
"specialName":"Third"}
]
You can iterate through this data with a bash script and jq like this:
function loopOverArray(){
jq -c '.[]' testing.json | while read i; do
# Do stuff here
echo "$i"
done
}
loopOverArray
Outputs:
{"specialId":"123","specialName":"First"}
{"specialId":"456","specialName":"Second"}
{"specialId":"789","specialName":"Third"}
Your format string is wrong. Change it to
insert = DateTime.ParseExact(line[i], "M/d/yyyy hh:mm", CultureInfo.InvariantCulture);
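For instance, with a made-up sample value (note that hh is the 12-hour clock; use HH in the format string if your data contains 24-hour times like 17:30):
using System.Globalization;

DateTime insert = DateTime.ParseExact("3/14/2017 07:06", "M/d/yyyy hh:mm", CultureInfo.InvariantCulture);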
It is XOR. Another name for the XOR function is addition without carry. I suppose that's how the symbol might make sense.
If you are using the Support Library provided DrawerLayout as suggested in the Creating a navigation drawer training, you can use the newly added android.support.v7.app.ActionBarDrawerToggle (note: different from the now deprecated android.support.v4.app.ActionBarDrawerToggle):
shows a Hamburger icon when drawer is closed and an arrow when drawer is open. It animates between these two states as the drawer opens.
While the training hasn't been updated to take the deprecation/new class into account, you should be able to use almost exactly the same code - the only difference in implementing it is the constructor.
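For reference, a minimal sketch of the v7 constructor (the drawerLayout field and string resources here are assumptions):
import android.support.v7.app.ActionBarDrawerToggle;

ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(
        this,                  // host Activity
        drawerLayout,          // the DrawerLayout to link with
        R.string.drawer_open,  // "open drawer" content description
        R.string.drawer_close  // "close drawer" content description
);
drawerLayout.setDrawerListener(toggle);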
By experiment I was able to observe this:
When SQL Profiler 2005 or SQL Profiler 2000 is used with a database residing in SQL Server 2000, the problem mentioned persists; but when SQL Profiler 2005 is used with a SQL Server 2005 database, it works perfectly!
In summary, the issue seems to be prevalent in SQL Server 2000 and rectified in SQL Server 2005.
The solution for the issue when dealing with SQL Server 2000 is (as explained by wearejimbo):
Identify the DatabaseID of the database you want to filter by querying the sysdatabases table as below:
SELECT *
FROM master..sysdatabases
WHERE name like '%your_db_name%' -- Remove this line to see all databases
ORDER BY dbid
Use the DatabaseID Filter (instead of DatabaseName) in the New Trace window of SQL Profiler 2000
Try lodash sortBy:
import * as _ from "lodash";

_.sortBy(data.applications, "id").map(application => (
    console.log(application)
));
Read more: lodash.sortBy
The path to nginx.conf, which is the primary configuration file for Nginx (and also the file which shall include the paths of other Nginx config files as and when required), is /etc/nginx/nginx.conf.
You may access and edit this file by typing this at the terminal:
cd /etc/nginx
/etc/nginx$ sudo nano nginx.conf
Further, in this file you may include other files, which can have a server directive as an independent server block that need not be within the http or https blocks, as is clarified in the accepted answer above.
I repeat: if you need a server block to be defined within the primary config file itself, then that server block will have to be defined within an enclosing http or https block in /etc/nginx/nginx.conf, the primary configuration file for Nginx.
Also note: it is OK to define a server block directly, without enclosing it in an http or https block, in a file located at /etc/nginx/conf.d. To make this work you will need to include the path of this file in the primary config file, as seen below:
http{
include /etc/nginx/conf.d/*.conf; #includes all files of file type.conf
}
Further to this, you may comment out from the primary config file the line:
http{
#include /etc/nginx/sites-available/some_file.conf; # Comment Out
include /etc/nginx/conf.d/*.conf; #includes all files of file type.conf
}
and you need not keep any config files in /etc/nginx/sites-available/, nor symbolically link them to /etc/nginx/sites-enabled/. Kindly note this works for me; in case anyone thinks it doesn't for them, or that this kind of config is illegal, please do leave a comment so that I may correct myself. Thanks.
EDIT: According to the latest version of the official Nginx Cookbook, we need not create any configs within /etc/nginx/sites-enabled/; this was the older practice and is deprecated now.
Thus there is no need for the include directive include /etc/nginx/sites-available/some_file.conf;.
Quote from the Nginx Cookbook, page 5:
"In some package repositories, this folder is named sites-enabled, and configuration files are linked from a folder named sites-available; this convention is deprecated."
/usr/local/lib/node_modules
is the correct directory for globally installed node modules.
/usr/local/share/npm/lib/node_modules
makes no sense to me. One issue here is that you're confused because there are two directories called node_modules:
/usr/local/lib/node_modules
/usr/local/lib/node_modules/npm/node_modules
The latter seems to be node modules that came with Node, e.g., lodash, while the former is Node modules that I installed using npm.
Use strpos()
:
if (strpos($string2, 'http') === 0) {
// It starts with 'http'
}
Remember the three equals signs (===
). It will not work properly if you only use two. This is because strpos()
will return false
if the needle cannot be found in the haystack.
Pass keyword arguments to flask.jsonify
and they will be output as a JSON object.
@app.route('/_get_current_user')
def get_current_user():
return jsonify(
username=g.user.username,
email=g.user.email,
id=g.user.id
)
{
"username": "admin",
"email": "admin@localhost",
"id": 42
}
If you already have a dict, you can pass it directly as jsonify(d)
.
exec does not execute a command in your shell
try
Process p = Runtime.getRuntime().exec(new String[]{"csh","-c","cat /home/narek/pk.txt"});
instead.
EDIT: I don't have csh on my system so I used bash instead. The following worked for me:
Process p = Runtime.getRuntime().exec(new String[]{"bash","-c","ls /home/XXX"});
Using Python will be one easy way to achieve what you want.
I found an example using Google, searching for "convert from json to csv using python".
Take a look at cavaj.
You should do something like this:
1) Create a directory object that points to a server-side accessible folder:
CREATE DIRECTORY image_files AS '/data/images'
/
2) Place your file into OS folder directory object points to
3) Give the required access privileges to the Oracle schema that will load data from the file into the table:
GRANT READ ON DIRECTORY image_files TO scott
/
4) Use the BFILENAME and EMPTY_BLOB functions and the DBMS_LOB package (example NOT tested - be careful) like below:
DECLARE
l_blob BLOB;
v_src_loc BFILE := BFILENAME('IMAGE_FILES', 'myimage.png');
v_amount INTEGER;
BEGIN
INSERT INTO esignatures
VALUES (100, 'BOB', empty_blob()) RETURN iblob INTO l_blob;
DBMS_LOB.OPEN(v_src_loc, DBMS_LOB.LOB_READONLY);
v_amount := DBMS_LOB.GETLENGTH(v_src_loc);
DBMS_LOB.LOADFROMFILE(l_blob, v_src_loc, v_amount);
DBMS_LOB.CLOSE(v_src_loc);
COMMIT;
END;
/
After this you get the content of your file in BLOB column and can get it back using Java for example.
edit: One letter left missing: it should be LOADFROMFILE.
You can run the MySQL command SHOW FULL PROCESSLIST;
to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably considerably slow down everything on the server though, with adding an extra INSERT
on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file would remove a lot of possibilities for flexibility of display, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile would do the trick.
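For example, the General Query Log can be switched on at runtime (MySQL 5.1+; the log file path is an assumption, adjust to your setup):
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
Then tail -f /var/log/mysql/general.log shows queries as they arrive.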
Do you want to know if a type is the same type as int64_t or do you want to know if something is 64 bits? Based on your proposed solution, I think you're asking about the latter. In that case, I would do something like
template<typename T>
bool is_64bits() { return sizeof(T) * CHAR_BIT == 64; } // or >= 64
I have been using Identity Impersonate:
<system.web>
<identity impersonate="true" userName="domain\username" password="password"/>
</system.web>
When pushing up to the server you have to give the username access to the Temporary ASP.NET
Files folder so it can read/write/execute properly:
C:\Windows\Microsoft.NET\"frameworkversion"\"aspversion"\Temporary ASP.NET Files
Obviously replace "frameworkversion" and "aspversion" with the versions you are using.
To preserve Enter line breaks in a textarea, use this CSS:
white-space: pre-wrap;
$mail->SMTPOptions = array(
'ssl' => array(
'verify_peer' => false,
'verify_peer_name' => false,
'allow_self_signed' => true
)
);
Another option is the Zend Server Community Edition.
@Bo Persson correctly states in his great answer here:
When passing an array as a parameter, this
void arraytest(int a[])
means exactly the same as
void arraytest(int *a)
However, let me add also that the above two forms also:
mean exactly the same as
void arraytest(int a[0])
which means exactly the same as
void arraytest(int a[1])
which means exactly the same as
void arraytest(int a[2])
which means exactly the same as
void arraytest(int a[1000])
etc.
In every single one of the array examples above, and as shown in the example calls in the code just below, the input parameter type decays to an int *
, and can be called with no warnings and no errors, even with build options -Wall -Wextra -Werror
turned on (see my repo here for details on these 3 build options), like this:
int array1[2];
int * array2 = array1;
// works fine because `array1` automatically decays from an array type
// to `int *`
arraytest(array1);
// works fine because `array2` is already an `int *`
arraytest(array2);
As a matter of fact, the "size" value ([0]
, [1]
, [2]
, [1000]
, etc.) inside the array parameter here is apparently just for aesthetic/self-documentation purposes, and can be any positive integer (size_t
type I think) you want!
In practice, however, you should use it to specify the minimum size of the array you expect the function to receive, so that when writing code it's easy for you to track and verify. The MISRA-C-2012 standard (buy/download the 236-pg 2012-version PDF of the standard for £15.00 here) goes so far as to state (emphasis added):
Rule 17.5 The function argument corresponding to a parameter declared to have an array type shall have an appropriate number of elements.
...
If a parameter is declared as an array with a specified size, the corresponding argument in each function call should point into an object that has at least as many elements as the array.
...
The use of an array declarator for a function parameter specifies the function interface more clearly than using a pointer. The minimum number of elements expected by the function is explicitly stated, whereas this is not possible with a pointer.
In other words, they recommend using the explicit size format, even though the C standard technically doesn't enforce it--it at least helps clarify to you as a developer, and to others using the code, what size array the function is expecting you to pass in.
(Not recommended, but possible. See my brief argument against doing this at the end.)
As @Winger Sendon points out in a comment below my answer, we can force C to treat an array type to be different based on the array size!
First, you must recognize that in my example just above, using the int array1[2];
like this: arraytest(array1);
causes array1
to automatically decay into an int *
. HOWEVER, if you take the address of array1
instead and call arraytest(&array1)
, you get completely different behavior! Now, it does NOT decay into an int *
! Instead, the type of &array1
is int (*)[2]
, which means "pointer to an array of size 2 of int", or "pointer to an array of size 2 of type int", or said also as "pointer to an array of 2 ints". So, you can FORCE C to check for type safety on an array, like this:
void arraytest(int (*a)[2])
{
// my function here
}
This syntax is hard to read, but similar to that of a function pointer. The online tool, cdecl, tells us that int (*a)[2]
means: "declare a as pointer to array 2 of int" (pointer to array of 2 int
s). Do NOT confuse this with the version withOUT parenthesis: int * a[2]
, which means: "declare a as array 2 of pointer to int" (AKA: array of 2 pointers to int
, AKA: array of 2 int*
s).
Now, this function REQUIRES you to call it with the address operator (&
) like this, using as an input parameter a POINTER TO AN ARRAY OF THE CORRECT SIZE!:
int array1[2];
// ok, since the type of `array1` is `int (*)[2]` (ptr to array of
// 2 ints)
arraytest(&array1); // you must use the & operator here to prevent
// `array1` from otherwise automatically decaying
// into `int *`, which is the WRONG input type here!
This, however, will produce a warning:
int array1[2];
// WARNING! Wrong type since the type of `array1` decays to `int *`:
// main.c:32:15: warning: passing argument 1 of ‘arraytest’ from
// incompatible pointer type [-Wincompatible-pointer-types]
// main.c:22:6: note: expected ‘int (*)[2]’ but argument is of type ‘int *’
arraytest(array1); // (missing & operator)
You may test this code here.
To force the C compiler to turn this warning into an error, so that you MUST always call arraytest(&array1);
using only an input array of the correct size and type (int array1[2];
in this case), add -Werror
to your build options. If running the test code above on onlinegdb.com, do this by clicking the gear icon in the top-right and click on "Extra Compiler Flags" to type this option in. Now, this warning:
main.c:34:15: warning: passing argument 1 of ‘arraytest’ from incompatible pointer type [-Wincompatible-pointer-types]
main.c:24:6: note: expected ‘int (*)[2]’ but argument is of type ‘int *’
will turn into this build error:
main.c: In function ‘main’:
main.c:34:15: error: passing argument 1 of ‘arraytest’ from incompatible pointer type [-Werror=incompatible-pointer-types]
   arraytest(array1); // warning!
   ^~~~~~
main.c:24:6: note: expected ‘int (*)[2]’ but argument is of type ‘int *’
 void arraytest(int (*a)[2])
 ^~~~~~~~~
cc1: all warnings being treated as errors
Note that you can also create "type safe" pointers to arrays of a given size, like this:
int array[2];
// "type safe" ptr to array of size 2 of int:
int (*array_p)[2] = &array;
...but I do NOT necessarily recommend this (using these "type safe" arrays in C), as it reminds me a lot of the C++ antics used to force type safety everywhere, at the exceptionally high cost of language syntax complexity, verbosity, and difficulty architecting code, and which I dislike and have ranted about many times before (ex: see "My Thoughts on C++" here).
For additional tests and experimentation, see also the links above.
To be totally exhaustive, things are different if you're using a JPA 1.0 or a JPA 2.0 implementation.
With JPA 1.0, you'd have to use EntityManager#getDelegate()
. But keep in mind that the result of this method is implementation specific i.e. non portable from application server using Hibernate to the other. For example with JBoss you would do:
org.hibernate.Session session = (Session) manager.getDelegate();
But with GlassFish, you'd have to do:
org.hibernate.Session session = ((org.hibernate.ejb.EntityManagerImpl) em.getDelegate()).getSession();
I agree, that's horrible, and the spec is to blame here (not clear enough).
With JPA 2.0, there is a new (and much better) EntityManager#unwrap(Class<T>)
method that is to be preferred over EntityManager#getDelegate()
for new applications.
So with Hibernate as JPA 2.0 implementation (see 3.15. Native Hibernate API), you would do:
Session session = entityManager.unwrap(Session.class);
I had the command correct per the above answers; what I missed was in the Workbench: where we mention 'Limit Connectivity from Host' for the user, it defaults to "%". Change this to "localhost" and it connects fine thereafter!
This is a simple solution which solved the problem for me:
label
{
/* for firefox */
vertical-align:middle;
/*for internet explorer */
*bottom:3px;
*position:relative;
padding-bottom:7px;
}
The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised in the Content-Type headers should not be changed and should be followed. This allows opting out of MIME type sniffing; in other words, it is a way to say that the webmasters knew what they were doing.
Syntax:
X-Content-Type-Options: nosniff
Directives:
nosniff: blocks a request if the requested type is
1. "style" and the MIME type is not "text/css", or
2. "script" and the MIME type is not a JavaScript MIME type.
Note: nosniff only applies to "script" and "style" types. Also applying nosniff to images turned out to be incompatible with existing web sites.
Specification:
https://fetch.spec.whatwg.org/#x-content-type-options-header
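For example, in nginx the header can be added with a single directive (Apache's mod_headers offers an equivalent Header set):
add_header X-Content-Type-Options nosniff;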